Over the past weeks, we have discussed the effects of artificial intelligence on society, both now and in the future. Another important topic to ponder, research, and discuss is the effect that society has on AI.
I’ll try not to get too sci-fi here or speculate too wildly, but I would like to explore the implications of human ethics, or what is understood as ethics, for machines, specifically machines powered by artificial intelligence.
What originally drew me to this topic was a WSJ article that was widely shared on Twitter this week, which many of us might have read. It goes into how Amazon, and the Alexa team specifically, is responding to hard-hitting questions relating to depression, abuse, and suicide, to name a few examples. The representatives from Amazon emphasize that while the company does not have a legal obligation, it does have an ethical one, because of the amount of influence and contact that people have with artificial intelligence.
People are asking far more of Alexa, the AI behind the Echo, the most popular device in the smart speaker market, than playing the new Taylor Swift song or reporting the weather. For each question prompting the response “I’m sorry, I don’t understand,” Amazon must tactfully craft new responses. While many users may be testing Alexa’s abilities as a joke, the team must account for the cases where users are treating Alexa as a companion.
This example raises an interesting point about the ethical responsibility, as well as the personality, of the machine, which the president of Amazon’s Alexa division discussed. Even with the development of machine learning, the machine itself is first and foremost made by humans and for humans. Because of that, the biases and dispositions of its creators are inherently intertwined in its code. Luckily, many people are doing a great deal of research on this and on what it means for the responsibility of machines. In fact, there is a whole interdisciplinary research center at New York University, aptly named the AI Now Institute, focusing on rights & liberties, labor & automation, bias & inclusion, and safety & critical infrastructure as they pertain to the widening implementation of artificial intelligence across industries.
AI Now cofounder Kate Crawford wrote an article, “Artificial Intelligence–With Very Real Biases,” published just last week, discussing further why AI needs to be checked. The largest and most influential artificial intelligence systems are being built by large private and public companies, which do not undergo the same checks and balances that the government would. AI, as we have been discussing, is already widely implemented in areas many are not even aware of yet, and it is here to stay. Artificial intelligence, according to Forbes.com contributor R.L. Adams, is a system that can learn on its own and improve its algorithms. He writes, “True A.I. can improve on past iterations, getting smarter and more aware, allowing it to enhance its capabilities and its knowledge.” This independence is what frightens people and what makes it so crucial that we work now on creating ground rules and discussing machine morality. The concept of machines behaving in a moral (or immoral) way has previously been the stuff of science fiction, and thus not at the forefront of research or technology.
The creators of AI, both present and future, must consider not only the benefits to humans but also the potential for harm. The more powerful these systems get, the more carefully this must be thought through. Even long before the moment of singularity and the emergence of superintelligence, when machines intellectually overtake human ability, AI will have the power to cause harm. In fact, as discussed in class, Facebook’s algorithms have already caused harm and created complications for users, particularly in their friend suggestions. Crawford’s concern is that no one will take this issue seriously until something seriously bad happens, comparing this new landscape of machine learning and artificial intelligence to the trust we place in our water sources. We believe the water is clean and safe to drink and use it daily, until perhaps a catastrophe like the one in Flint, MI occurs, and it is too late. She points out that systems using artificial intelligence have in fact already been wrong, and gives the example of their use in policing. In Chicago, a system built on biased data was used to help police identify at-risk individuals with the end goal of reducing violent crime. A study following the progress of this initiative showed no significant decrease in violent crime, and on top of that, an increase in police harassment complaints.
Countless other examples exist where bias, and as a result discrimination, is intertwined with algorithms. One example that Google could not seem to explain was why women were far less likely than men to be shown an ad for a high-paying job. When people are involved, which happens to be the majority of the time, there needs to be better testing and monitoring of these biases before deploying on a large scale and allowing the system to get out of hand. The software that goes into AI, and even more basic algorithms, may collect data objectively, but it is not unaffected by human influence and therefore bias.
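To make the idea of “testing and monitoring” concrete, here is a minimal sketch of what one pre-deployment bias check could look like. Everything here is hypothetical: the log data is fabricated for illustration, and the 0.8 threshold is borrowed from the “four-fifths rule” sometimes used in employment-discrimination analysis, not from any real ad system.

```python
# Hypothetical sketch of a pre-deployment bias check:
# compare how often two groups were shown a high-paying-job ad.
# All data below is made up for illustration.

def exposure_rate(log, group):
    """Fraction of users in `group` who were shown the ad."""
    shown = [entry["shown"] for entry in log if entry["group"] == group]
    return sum(shown) / len(shown)

def parity_ratio(log, group_a, group_b):
    """Ratio of exposure rates; values far below 1.0 flag a disparity."""
    return exposure_rate(log, group_a) / exposure_rate(log, group_b)

# Fabricated ad-serving log for demonstration only.
log = [
    {"group": "women", "shown": True},
    {"group": "women", "shown": False},
    {"group": "women", "shown": False},
    {"group": "women", "shown": False},
    {"group": "men", "shown": True},
    {"group": "men", "shown": True},
    {"group": "men", "shown": True},
    {"group": "men", "shown": False},
]

ratio = parity_ratio(log, "women", "men")
print(f"exposure parity ratio: {ratio:.2f}")  # 0.25 / 0.75 -> 0.33
if ratio < 0.8:  # four-fifths rule of thumb
    print("warning: possible disparate impact, investigate before deploying")
```

A check this simple obviously cannot certify a system as fair, but running something like it continuously against live logs is one way a deployed system could be monitored rather than trusted blindly.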
There is a lot to consider when even beginning to discuss potential ethical issues in artificial intelligence, like how we prevent artificial intelligence from teaching itself into artificial stupidity, as one forum put it. What makes this subject even more complicated is that, as any philosophy class can confirm, there is no single succinct definition of ethics or morality, so who gets to decide? I believe many people should be involved in working to create machines that are, perhaps, less biased than we are.