What the NBA, Mississippi Politics, and Facebook Can Teach Us About Machine Learning

In 2016, the Republican Party gained a supermajority in the Mississippi House of Representatives, and with it the power threshold to pass revenue and tax-related bills, all because of a man named Mark Tullos and a pile of straws.

Tullos had originally lost the 2015 election to Bo Eaton, a Democrat and 20-year incumbent, after a drawing of straws broke their tie. But Tullos appealed to the House, which voted overwhelmingly to unseat Eaton and seat him instead. And while Tullos won his appeal, 24 states still legally allow tied legislative elections to be decided by games of chance such as drawing straws or flipping coins, a practice that dates back to ancient Athens.

Is there something innately wrong with this? Both Eaton and Tullos thought so. Despite his original win, Eaton echoed Tullos’s criticism of breaking electoral ties by games of chance, stressing, “It’s wrong — philosophically, morally…It’s archaic, it’s medieval, and it’s wrong.”

 


If not for a coin toss, Magic Johnson would probably never have ended up on the Lakers.

But games of chance have decided a great deal throughout history: Portland, Oregon’s name, Secretariat’s owner, which division chose first in the NBA draft, even which Wright brother manned the first powered flight. What’s more, sortition, the practice of selecting political officials at random from a pool of candidates, dates back to Athenian democracy, where Greeks relied on a randomization device called a kleroterion, which used colored dice and slots to select government officials. Citizens believed this method captured the true essence of democracy, and beyond politics, sortition survives today in processes such as charter school lotteries and even the US green card system. It’s random, it’s unbiased, it’s fair, right? What’s so “wrong” about it, then?

To explore this concept further, it may be worthwhile to examine a modern-day sortition of sorts: machine learning. According to Mike Yeomans of Harvard University, machine learning is essentially a branch of statistics designed for a world of big data, built around algorithms that make analytical decisions far faster and at far greater scale than any human being could. Companies now use algorithms for countless processes (one ex-Google employee has said that everything at the company runs on machine learning).

Consider your credit score, for example, or risk assessments, or even your Netflix “Recommended for you” queue. Through machine learning, companies can find and recruit top talent, target consumers with specific ads, and even tell the difference between real and fake art.

No, machine learning isn’t just making selections randomly, like the kleroterion.


We’ve come a long way since the Kleroterion — or have we? (Ancient Agora Museum, Athens)

Algorithms are built on predictive patterns: patterns identified through feature extraction (determining which variables to include in the model), regularization (determining how to weight those variables), and cross-validation (checking that the model’s predictions hold up on data it hasn’t seen). In theory, these safeguards protect the integrity of the machine’s decisions, effectively erasing any semblance of chance or error, and many experts argue that algorithms offer an unbiased, calculated approach to complex problem-solving. More importantly, they make decisions so that people don’t have to.
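To make those three safeguards concrete, here is a minimal sketch in Python using scikit-learn. The synthetic dataset, the choice of a lasso model, and every parameter below are illustrative assumptions on my part, not a description of any real company’s pipeline.

```python
# A minimal sketch of the three safeguards named above, using scikit-learn.
# The synthetic data and all parameter choices are illustrative assumptions.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                         # 10 candidate variables
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=200)   # only 2 actually matter

pipeline = Pipeline([
    # Feature extraction: keep the variables most related to the outcome.
    ("select", SelectKBest(f_regression, k=4)),
    # Regularization: the L1 penalty shrinks the weights on noisy variables.
    ("model", Lasso(alpha=0.1)),
])

# Cross-validation: score the model on held-out folds it never trained on.
scores = cross_val_score(pipeline, X, y, cv=5)
print(f"mean R^2 across 5 folds: {scores.mean():.3f}")
```

Notice that nothing here is left to the kleroterion: a human chose which variables to test, how hard to penalize them, and how many held-out folds count as “accurate.”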

Once, experts asserted that machines’ predictive abilities and decision-making rivaled (and, in many cases, even outperformed) those of humans. In 2010, MIT’s Andrew McAfee went so far as to almost fully discredit intuition in decision-making, suggesting humans serve instead as a conduit: receiving the machine’s decisions, overriding them when necessary, and feeding data about each override back into the system, effectively teaching the machine to operate like the human. But now, after a series of infamous missteps in machine learning (think of the targeted ads and Netflix recommendations mentioned earlier), many experts, like mathematician Cathy O’Neil, are urging companies to exercise more caution around these algorithms, even as computers get smarter and faster.

Take the more recent Facebook controversy, for example: outlets attribute many of Facebook’s errors (privacy concerns, anti-Semitism, fake news and clickbait, etc.) to its newsfeed and ad algorithms. But while the algorithm is the operator getting most of the attention and blame, Slate recently argued that despite the algorithm’s “failings,” the humans behind it cannot be absolved of responsibility.

And this is my point: while, as journalist Will Oremus puts it, “algorithms, in the popular imagination, are mysterious, powerful entities,” their mystery and magical qualities don’t equate to randomness or probability. They are not games of chance like drawing straws or flipping a coin, and, more importantly, they aren’t impartial. As journalist Roman Mars says, “many companies that build and market these algorithms like to talk about how objective they are, claiming they remove human error and bias from complex decision-making. But in reality, every algorithm reflects the choices of its human designer.”

Understanding these key points has huge implications for how we approach machine learning: its capabilities and successes as well as its challenges and pitfalls. Human beings, managers and the techies behind the code alike, must take responsibility for the machine, even more than they already do. Part of this may mean more care with the language of PR releases about technological mishaps and their causes, or more transparency around a company’s data processes, but ultimately it all comes down to how we collect, analyze, and use data. Not only should we be mindful of where our data comes from, its quality, and its limitations; we should also be more willing to question and examine the data on where our algorithms fail and why. After all, none of it is as simple as holding straws in our hands and letting people choose what they please.

 

4 comments

  1. Great post! With so many of our decisions being filtered by anonymous algorithms it’s so easy to think of them as objectively correct and omniscient “beings,” when in reality they are built and designed by flawed and fallible humans. Is it really a step up from randomness if we take the answers generated from human made algorithms as objective truth?

  2. This is a great way to think about it – as machines have more and more ‘responsibility’, who’s liable? It’s something that I’ve considered a lot when it comes to driverless cars. If the machine is doing the navigating, who is insurance covering? Driver? Car manufacturer? Software developer? I’d be nervous to be developing the AI for some of these things until liability is further developed and clarified.

  3. Nice post. We’ll be dealing with some of these very issues in a couple of weeks when we discuss AI in depth. I’ll save my interesting insights until then!

  4. Hi, Emma. This is a very well-planned and organized post. While I generally think that the decision-making algorithms we are using today are an improvement over randomization, I do see your point about how the technology is still flawed. Even as more of the kinks are worked out and the technology gets smarter, I still think there is a need for some human supervision. Call me old-school, but there are just some unique aspects of human decision-making, such as emotions and morals, that are positive factors and can’t be accurately represented in algorithms.
