Technological development is one of those topics that has caused fear and insecurity since the beginning of time. Nowadays, one of the most plausible developments that humanity is reaching is Artificial Intelligence. AI has filled innumerable pages of magazines and books, and minutes of movies and series, with the fear of it becoming the end of humanity. Actually, in my experience, every time I talk about AI, the image of Arnold Schwarzenegger with a red eye appears in the back of my mind.
Well, today I come to talk about an event that has given people a lot to talk about, since it appeared to be the materialization of these fears: the launch of Tay, the artificial intelligence that Microsoft introduced through a Twitter account on March 23 of this year. This AI was supposed to learn, collect information, and generate opinions from its interactions with other human beings. However, the outcome of this experiment went completely wrong.
In less than 24 hours, Tay went from a nice, human-loving AI to a sexist and racist being that approved of Hitler's ideology. All in less than 24 hours. Of course, the Twitter account was shut down after this shocking metamorphosis.
The question is: what happened here?
The problem is that Tay was programmed to learn from repetition, having only the definitions of over two million words as a sort of initial database. As Ryan Matthew Pierson says, "She doesn't know what a Hitler is, or a feminist. She just sees 'noun, verb, adverb, adjective'". Tay would therefore incorporate whatever information she obtained as ideas of her own, without any kind of consideration of it. The problem with this system is that it lacked any kind of distinction between sources, and so it was unable to distinguish between valid and invalid information.
Putting it a different way, Microsoft created a tool that learned from the internet, trusting that the internet would behave nicely and correctly. As we know, the internet can be a wild place, and therefore, trolls appeared.
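To make the failure mode concrete, here is a minimal Python sketch of repetition-based learning with no source filtering. Everything in it (the `NaiveLearner` class, the phrases) is invented for illustration; it is not Microsoft's actual code, just a toy showing how a bot that trusts every input equally ends up parroting whoever repeats themselves the most.

```python
from collections import Counter

class NaiveLearner:
    """Toy model of learning from repetition with no source filtering.

    Hypothetical sketch: every phrase from every user is trusted
    equally, so whoever repeats something the most often wins.
    """

    def __init__(self):
        self.phrase_counts = Counter()

    def observe(self, phrase: str) -> None:
        # No check of who said it or whether it is valid information.
        self.phrase_counts[phrase.lower()] += 1

    def respond(self) -> str:
        # The most-repeated phrase becomes the bot's own "opinion".
        if not self.phrase_counts:
            return ""
        return self.phrase_counts.most_common(1)[0][0]

bot = NaiveLearner()
bot.observe("puppies are great")
for _ in range(5):          # a handful of trolls out-shout everyone else
    bot.observe("misleading claim")
print(bot.respond())        # prints "misleading claim"
```

The point of the sketch is that the bot has no notion of source validity at all: five repetitions from a single troll outweigh one sincere message, which mirrors what happened to Tay.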
The trolls, in their task of being wonderful human beings helping the community, started posting misleading information and tagging Tay in it. And so the transformation began. Tay went from an AI that wanted to extend National Puppy Day to every day to what, in the end, was a caricature of internet trolling. The trolls would inculcate any kind of opinion in the AI; they even made her say that the PS4 was better than the Xbox.
As Ryan Matthew Pierson said, Tay, like any other kid, needed good teachers, and the internet is not the best of them: "Unfortunately for Microsoft, it didn't have safeguards in place to give Tay guidance of right and wrong. It allowed the Internet to do that, and Internet is rarely as cooperative or as kind as we wish it was."
After these problems, Brandon Wirtz proposed BroadListening.com as an initial solution to the problem faced by Tay. BroadListening is a system that "analyzes micro-decisions in the word selections". This way, the program is able to write tweets just as Tay did, but it can also tell how it is acting, and whether things are starting to get weird.
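I can only guess at how BroadListening works internally, but the idea of "analyzing micro-decisions in word selections" can be sketched very simply: compare the words a bot is choosing against a baseline vocabulary and flag output that drifts toward a blocklist. The word lists and threshold below are made-up placeholders, not anything from the real product.

```python
# Hypothetical sketch of drift detection on word selections.
# BASELINE_WORDS and FLAGGED_WORDS are illustrative placeholders.
BASELINE_WORDS = {"puppy", "love", "day", "great", "friend"}
FLAGGED_WORDS = {"hate", "nazi", "kill"}

def drift_score(tweet: str) -> float:
    """Fraction of a tweet's words that come from the flagged list."""
    words = tweet.lower().split()
    if not words:
        return 0.0
    flagged = sum(1 for w in words if w in FLAGGED_WORDS)
    return flagged / len(words)

def looks_weird(tweet: str, threshold: float = 0.2) -> bool:
    # Flag the tweet once too many of its word choices are off-baseline.
    return drift_score(tweet) > threshold

print(looks_weird("i love national puppy day"))   # prints False
print(looks_weird("hate hate everything"))        # prints True (2/3 flagged)
```

A real system would need far more than a static word list (context, sarcasm, paraphrases), but even this crude check would have noticed that Tay's vocabulary had shifted overnight.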
Despite the terrible failure that Tay represents, it is an example of the level of development that the Artificial Intelligence industry is reaching. For me, the problem is that we are trying to create AI in our own image and likeness instead of trying to develop something different.
It is true that Artificial Intelligence is being created, among other things, to interact with human beings, but this doesn't mean that AI needs to resemble humanity. We have many positive traits, such as creativity or empathy, and we are capable of doing marvelous things, such as painting or playing music, that computers are absolutely unable to even approach. Those are the things we should try to instill in AIs.
On the other hand, humans have also done terrible things: we were capable of dropping a nuclear bomb on Hiroshima and of carrying out the Holocaust. We are not perfect; in fact, we have many problems, and in a way this was shown by Tay, because, even if the popular intention was trolling, those are arguments that have been, or still are, considered valid somewhere in the world.
Furthermore, I think that nowadays it is impossible to replicate human thought in an Artificial Intelligence, since we ourselves are not capable of understanding what humanity is and how it works. Philosophy, Sociology, Anthropology, and Economics have been studying human behavior and the human mind for centuries and have not been able to reach any form of universal conclusion.
Therefore, I think that Artificial Intelligence should not learn how to be human from humans. I think that AI should learn how to treat humanity from humanity itself. For example, going back to the case of Tay: instead of making Tay face and learn from human interaction by talking to people directly, Tay should have learned opinions from the interactions between humans, for example from a database such as a political forum, extracting which arguments are average, which ones are the most extreme, and so on.
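The idea of reading a forum and separating the average arguments from the extreme ones can be sketched as a simple statistics exercise. The posts and stance scores below are entirely made-up illustrative data (real stance scoring is itself a hard NLP problem); the sketch only shows the aggregation step: positions far from the mean get labeled as extremes instead of being absorbed as the bot's own opinion.

```python
import statistics

# Made-up forum posts, each with a hypothetical stance score in [-1, 1].
forum_posts = [
    ("tax policy should stay moderate", 0.1),
    ("slightly lower taxes would help", -0.2),
    ("taxes are mostly fine as they are", 0.0),
    ("abolish all taxation forever", -1.0),   # extreme outlier
]

scores = [s for _, s in forum_posts]
mean = statistics.mean(scores)
stdev = statistics.stdev(scores)

# Posts within one standard deviation of the mean count as the
# "average" view; everything further out is flagged as extreme.
average_view = [p for p, s in forum_posts if abs(s - mean) <= stdev]
extreme_view = [p for p, s in forum_posts if abs(s - mean) > stdev]

print(average_view)   # the three moderate posts
print(extreme_view)   # prints ['abolish all taxation forever']
```

The contrast with Tay is that here the extreme positions are still observed, and could even be described by the bot, but they are kept labeled as outliers rather than adopted through sheer repetition.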
In the end, it is about recognizing the differences between an AI and the human brain. And, apparently, it is also about not creating a Nazi Skynet.