Ethics and AI

Over the past weeks, we have discussed the effects of artificial intelligence on society, both now and in the future.  Another important topic to ponder, research, and discuss is the effect that society has on AI.

I’ll try not to get too sci-fi here or speculate too wildly, but I would like to explore the implications of human ethics, or what we understand as ethics, for machines, specifically machines powered by artificial intelligence.

What originally drew me to this topic was a WSJ article, widely shared on Twitter this week, that many of us might have read. It goes into how Amazon, and the Alexa team specifically, is responding to hard-hitting questions relating to depression, abuse, and suicide, to name a few examples.  The representatives from Amazon emphasize that while the company does not have a legal obligation to handle these questions, it does have an ethical one, given the amount of influence and contact that people have with artificial intelligence.

People are asking Alexa, the AI behind the Echo, the most popular device on the smart speaker market, for far more than the new Taylor Swift song or the weather report.  For each question that today prompts the response “I’m sorry, I don’t understand,” Amazon must tactfully craft a new one.  While many users may be testing Alexa’s abilities as a joke, the team must design for the cases where users are treating Alexa as a companion.
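To make the design problem concrete, here is a minimal, purely hypothetical sketch of the kind of routing logic a voice assistant team might use to catch sensitive queries and return a curated, human-written response instead of the default fallback. All of the keywords, responses, and function names below are my own illustrative assumptions; this is not Amazon’s actual implementation.

```python
# Hypothetical sketch: route sensitive queries to curated responses.
# Keywords, responses, and names are illustrative, not Alexa's real logic.

SENSITIVE_RESPONSES = {
    "depressed": "I'm sorry you're feeling that way. It may help to talk "
                 "to a friend, family member, or doctor.",
    "abuse": "That sounds serious. If you're in danger, please contact "
             "local emergency services or a support hotline.",
    "suicide": "You're not alone. Crisis hotlines are available 24/7 and "
               "someone there can talk with you right now.",
}

DEFAULT_FALLBACK = "I'm sorry, I don't understand."

def respond(utterance: str) -> str:
    """Return a curated response for sensitive queries, else fall back."""
    text = utterance.lower()
    for keyword, response in SENSITIVE_RESPONSES.items():
        if keyword in text:
            return response
    return DEFAULT_FALLBACK

print(respond("Alexa, I'm feeling really depressed"))
```

A real system would use an intent classifier rather than crude keyword matching, and the genuinely hard part, as the article suggests, is writing the responses themselves, presumably with expert input, not detecting the questions.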

This example raises interesting questions about the ethical responsibility, as well as the personality, of the machine, something the head of Amazon’s Alexa division has spoken about.  Even with the development of machine learning, the machine itself is first and foremost made by humans and for humans.  Because of that, the biases and dispositions of its creators are inherently intertwined in its code.  Luckily, a great deal of research is being done on this and on what it means for the responsibility of machines.  In fact, there is a whole interdisciplinary research center at New York University, aptly named the AI Now Institute, focusing on rights and liberties, labor and automation, bias and inclusion, and safety and critical infrastructure as they pertain to the widening implementation of artificial intelligence across industries.

AI Now cofounder Kate Crawford wrote an article, “Artificial Intelligence–With Very Real Biases,” published just last week, discussing further why AI needs to be checked.  The largest and most influential artificial intelligence systems are being built by large private and public companies, which do not undergo the same checks and balances that a government would.  AI, as we have been discussing, is already widely implemented in areas many are not even aware of, and it is here to stay.  Artificial intelligence, according to Forbes.com contributor R.L. Adams, is a system that can learn on its own and improve its own algorithms.  He writes, “True A.I. can improve on past iterations, getting smarter and more aware, allowing it to enhance its capabilities and its knowledge.”  This independence is what frightens people and what makes it so crucial that we work now on creating ground rules and discussing machine morality.  The concept of machines behaving in a moral (or immoral) way has previously been the stuff of science fiction, and thus not at the forefront of research or technology.

The creators of AI, both in the present and the future, must consider not only the benefits to humans but also the potential for harm.  The more powerful these systems get, the more carefully this must be thought through.  Even long before the moment of singularity and the emergence of superintelligence, when machines intellectually overtake human ability, AI will have the power to cause harm.  In fact, as discussed in class, Facebook’s algorithms have already caused harm and created complications for users, particularly in their friend suggestions.  Crawford’s concern is that no one will take this issue seriously until something seriously bad happens.  She compares this new landscape of machine learning and artificial intelligence to the trust we place in our water sources: we believe the water is clean and safe to drink and use it daily, until a catastrophe like the one in Flint, MI occurs, and it is too late.  She points out that systems using artificial intelligence have already been wrong, giving the example of policing.  In Chicago, a system built on biased data was used to help police identify at-risk individuals, with the end goal of reducing violent crime.  A study following the progress of this initiative showed no significant decrease in violent crime, and on top of that, an increase in police harassment complaints.

Countless other examples exist where bias, and as a result discrimination, is intertwined with algorithms.  One example that Google could not seem to explain was why women were far less likely to see an ad for a high-paying job.  When people are involved, which happens to be the majority of the time, there needs to be better testing and monitoring of these biases before deploying on a large scale and allowing the system to get out of hand.  The software behind AI, and even behind more basic algorithms, may collect data objectively, but it is not unaffected by human influence, and therefore not free of bias.
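As one concrete illustration of what “testing and monitoring” could mean, here is a minimal sketch of a disparate-impact check: comparing how often a positive outcome (say, being shown a high-paying job ad) goes to each group. The data, group labels, and threshold are all made-up assumptions for illustration, not figures from the Google case.

```python
# Minimal sketch of a disparate-impact check on made-up ad-serving data.
# The records and the 0.8 threshold (the common "four-fifths" rule of
# thumb) are illustrative assumptions.

records = [
    # (group, shown_high_paying_ad)
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", False), ("women", True), ("women", False), ("women", False),
]

def selection_rate(records, group):
    """Fraction of a group that received the positive outcome."""
    outcomes = [shown for g, shown in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_men = selection_rate(records, "men")      # 0.75
rate_women = selection_rate(records, "women")  # 0.25

# Disparate-impact ratio: disadvantaged group's rate / advantaged group's.
ratio = rate_women / rate_men
print(f"men: {rate_men:.2f}, women: {rate_women:.2f}, ratio: {ratio:.2f}")

if ratio < 0.8:
    print("Warning: possible disparate impact; audit before deployment.")
```

Even a simple check like this, run before and after deployment, would surface the kind of skew in the job-ad example; the hard part is deciding which groups and outcomes to measure in the first place.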

There is a lot to consider when even beginning to discuss potential ethical issues in artificial intelligence, like how to prevent artificial intelligence from teaching itself into artificial stupidity, as one forum put it.  What makes this subject even more complicated is that, as any philosophy class can confirm, there is no single, succinct definition of what ethics or morality is, so who gets to decide?  I believe many people should be involved in working to create machines that are, perhaps, less biased than we are.

Sources:

5 comments

  1. sejackson33 · ·

    This is a very interesting post discussing an important problem with AI. After seeing that tweet about how Alexa responds to sensitive issues like depression or suicidal thoughts, I thought a lot about what exactly would be the correct and most helpful response. I think it is truly hard to discern the best way to teach a machine to deal with these issues. As AI becomes more and more integrated into our lives, I agree that it will be important for many humans to contribute to its intelligence to minimize bias. The creators of these machines and AI devices need to think about potential problems and sensitive questions that are likely to come up.

  2. cgoettelman23 · ·

    This is a super insightful power. I’ve always found devices like Alexa, and even Siri, to be creepy, not cool. I just recently watched “Snowden”, a movie about the massive NSA exposure. In recent weeks, I have become a huge skeptic of both AI and how it is increasingly integrated into our daily lives. It’ll be interesting to see how it develops in the future.

    1. cgoettelman23 · ·

      And by power, I meant post. Sorry!

  3. Nice post! As we saw in the videos we watched for our AI class, I think that new developments will definitely be coming sooner than we think. I recently found out that they are testing Alexa to be able to converse with you as if it were a real person. I have tried the beta Alexa chatbot, and it is really cool. As reliable as I find Alexa to be, though, I am scared of how real she will seem as they continue developing it.

  4. I think AI programming will be highly regulated in the future. I do not foresee the government and its constituents accepting an individual programming a machine to learn without knowing what biases that person brings to the table. Look at political parties; a person on the left won’t want AI programmed by someone on the right, and vice versa. It will be interesting to see how it plays out. I think AI is going to hit the masses soon, but I feel regulations will be five years behind.
