Tay: Microsoft's Nazi A.I.


Technological development is one of those topics that has caused fear and insecurity since the beginning of time. Nowadays, one of the most plausible developments humanity is reaching is Artificial Intelligence. AI has filled innumerable pages of magazines and books, and minutes of movies and series, with the fear that it will become the end of humanity. Actually, in my experience, every time I talk about AI, the image of Arnold Schwarzenegger with a red eye appears in the back of my mind.

Well, today I want to talk about an event that has given people a lot to talk about, since it appeared to be the materialization of these fears: the launch of Tay, the artificial intelligence that Microsoft introduced on a Twitter account on March 23 of this year. This AI was supposed to learn, collect information, and form opinions from its interactions with other human beings. However, the experiment went completely wrong.

In less than 24 hours Tay went from a nice, human-loving AI to a sexist and racist being that approved of Hitler's ideology. All in less than 24 hours. Of course, the Twitter account was shut down after this shocking metamorphosis.


The question is: what happened here?

The problem is that Tay was programmed to learn from repetition, with only the definitions of over two million words as a sort of initial database. As Ryan Matthew Pierson says, "She doesn't know what a Hitler is, or a feminist. She just sees 'noun, verb, adverb, adjective.'" Tay would therefore incorporate the information she received as ideas of her own, without any kind of reflection on it. The problem with this system is that it lacked any distinction between sources, and was therefore unable to tell valid information from invalid information.
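A toy sketch (in Python, with invented data; not Microsoft's actual code) shows why repetition-only learning with no source distinction goes wrong: every incoming message counts the same, so a small group of coordinated trolls can outweigh everyone else.

```python
from collections import Counter

class RepetitionLearner:
    """Toy model of Tay-style learning: adopt whatever is repeated most,
    with no notion of who said it or whether it is valid."""

    def __init__(self):
        self.phrase_counts = Counter()

    def ingest(self, tweet: str):
        # Every message is weighted identically, troll or not.
        self.phrase_counts[tweet.lower()] += 1

    def opinion(self) -> str:
        # The bot's "own idea" is simply the most-repeated input.
        return self.phrase_counts.most_common(1)[0][0]

bot = RepetitionLearner()
bot.ingest("puppies are great")
for _ in range(5):          # a handful of trolls repeating one message
    bot.ingest("some awful troll take")
print(bot.opinion())        # the trolls win by sheer repetition
```

Five repetitions from trolls beat one genuine message; scale that up to thousands of coordinated tweets and the transformation the post describes follows naturally.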

To put it a different way, Microsoft created a tool that learned from the internet, trusting that the internet would behave nicely and correctly. As we know, the internet can be a wild place, and so the trolls appeared.

The trolls, in their task of being wonderful human beings helping the community, started posting misleading information and tagging Tay in it. And so the transformation began. Tay went from an AI that wanted to extend National Puppy Day to every day to what, in the end, was a caricature of internet trolling. The trolls could inculcate any kind of opinion into the AI; they even made her say that the PS4 was better than the Xbox.

As Ryan Matthew Pierson said, Tay, like any other kid, needs good teachers, and the internet is not the best of them: "Unfortunately for Microsoft, it didn't have safeguards in place to give Tay guidance of right and wrong. It allowed the Internet to do that, and the Internet is rarely as cooperative or as kind as we wish it was."



After these problems, Brandon Wirtz proposed BroadListening.com as an initial solution to the problem Tay faced. BroadListening is a system that "analyzes micro-decisions in the word selections." This way the program is able to write tweets just as Tay did, but it can also assess how it is acting, and whether things are starting to get weird.
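A minimal sketch of that kind of self-monitoring might look like the following. The word list, scoring rule, and threshold are all invented for illustration; the real BroadListening system is not public, so this only captures the general idea of checking the bot's own word selections before posting.

```python
# Hypothetical self-check: score a draft tweet's word choices against a
# list of flagged terms and refuse to post when the language drifts.
FLAGGED_WORDS = {"hitler", "nazi", "hate"}

def drift_score(tweet: str) -> float:
    """Fraction of words in the tweet that are flagged terms."""
    words = tweet.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in FLAGGED_WORDS for w in words) / len(words)

def safe_to_post(tweet: str, threshold: float = 0.1) -> bool:
    # Post only if the flagged-word density stays below the threshold.
    return drift_score(tweet) < threshold

print(safe_to_post("happy national puppy day!"))  # True
print(safe_to_post("hitler was right"))           # False
```

Even this crude filter would have caught Tay's worst tweets, though a real system would need far more than a keyword list, since trolls adapt quickly to simple filters.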

Despite being a terrible failure, Tay is an example of the level of development the Artificial Intelligence industry is reaching. For me, the problem is that we are trying to create AI in our own image and likeness instead of trying to develop something different.

It is true that Artificial Intelligence is being created, among other things, to interact with human beings, but this doesn't mean that AI needs to resemble humanity. We have many positive traits, such as creativity and empathy, and we are capable of marvelous things, such as painting or playing music, that computers are absolutely unable to even approach. Those are the things we should try to instill in our AIs.

On the other hand, humans have also done terrible things: we are capable of dropping a nuclear bomb on Hiroshima, or of the Holocaust. We are not perfect; in fact, we have many problems, and in a way Tay showed this, because even if the popular intention was trolling, those are arguments that have been, or still are, considered valid somewhere in the world.

Furthermore, I think that nowadays it is impossible to replicate human thought in an Artificial Intelligence, since we ourselves are not capable of understanding what humanity is and how it works. Philosophy, sociology, anthropology, and economics have been studying human behavior and the human mind for centuries and have not been able to reach any universal conclusion.

Therefore, I think that Artificial Intelligence should not learn how to be human from humans. I think that AI should learn how to treat humanity from humanity. Going back to the case of Tay: instead of making Tay learn from human interaction by talking to people directly, Tay should have learned opinions from the interactions between humans, for example from a database such as a political forum, extracting which arguments are the average ones, which are the most extreme, and so on.
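The approach proposed above can be sketched as follows. The forum corpus and the 10% cutoff are invented for illustration; the point is that by mining human-to-human discussion instead of direct replies to the bot, rare extreme positions can be separated from mainstream ones by their frequency.

```python
from collections import Counter

# Invented example corpus standing in for a political forum.
forum_posts = (
    ["taxes should fund healthcare"] * 40
    + ["taxes should be lower"] * 35
    + ["abolish all government"] * 2   # rare, extreme position
)

counts = Counter(forum_posts)
total = sum(counts.values())

# Positions voiced by at least 10% of posts count as mainstream;
# anything rarer is treated as an outlier the AI should not adopt.
average_views = [p for p, n in counts.items() if n / total >= 0.10]
extreme_views = [p for p, n in counts.items() if n / total < 0.10]

print("mainstream:", average_views)
print("outliers:", extreme_views)
```

A frequency cutoff like this is obviously simplistic (a popular opinion is not automatically a good one), but it illustrates the difference from Tay's design: the bot observes the distribution of human arguments rather than absorbing whatever is repeated at it directly.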

In the end, it is about recognizing the differences between an AI and the human brain. And apparently it is also about not creating a Nazi Skynet.










  1. I remember when this happened with Tay, nice job bringing it up for this post. It really goes to show how sometimes all that collaboration that social networks allow for can really be used for less positive things. Trolls are everywhere, and it does not take much for them to come together to ruin a good idea. I agree with you completely; I don’t think future AI should learn through direct interaction with people, especially not via social media. But there are many challenges ahead.

  2. Great article, I remember when Tay went haywire and Microsoft had to take her down. I don’t think it’s a bad idea to have AI learn by human interactions, we just have to program a way for AI to distinguish valid and invalid interactions like you said. This might be impossible to do completely, but some degree of it could have definitely helped Tay. But for the most part, I definitely agree Tay should have been looking at interactions between humans instead of taking advice directly from trolls. Additionally, AI is already creating art and music, but whether or not we will ever program AI with empathy and human creativity remains unknown.

  3. Companies should learn that when they explicitly hand stuff like this over to the Internet crowds, they are inevitably going to try to cause some mischief. The “crowd” really only works if they have some sort of shared goal or ethos. Just opening things like this to the world often ends badly. I do think the future of AI is an interesting topic though and one that will be a big deal in coming years.

  4. ikechukwu_28

     Very interesting topic. I remember when this happened as well, and was very intrigued by the topic. I find it kind of odd that no one at Microsoft set up any safeguards for Tay so that it could somehow filter through all of the information it got from the internet. I find it kind of obvious that when making this AI you would have to account for the many trolls on the internet, and it’s a little careless that Microsoft didn’t do this.

  5. jagpalsingh03

     Great topic. I remember when Tay came out and this whole ordeal went down. It was a bad idea from the start, and all Microsoft had to do was look at the top reply tweets from other popular Twitter accounts to see what the public does when it has the spotlight. I agree that creating an AI that is essentially human is impossible, because what does it even mean to be human? We can’t even answer that question, so how do we implement the answer? And it is possible that humanity is a spectrum, not a binary “human or not human” situation. I am interested in where this field of research goes though, as robots, automation, etc. become more ubiquitous.

  6. mashamydear

     I love how well this connects to crowdsourcing — Tay is like the product of crowdsourcing gone wrong. Instead of higher-ups sifting through the ideas presented by the crowd, all ideas were represented, for better or for worse. I agree with you about having a more hierarchical approach in controlling the information Tay digests, but it’s also pretty cool to think that there’s something out there that can learn simply by analyzing what’s around them (he/she/it).

  7. magicjohnshin1

     Great read and interesting topic you chose! Love the pictures and comedic nature of the blog. I love the idea of Tay because it’s so hilarious to see, but at the same time, the things she says are so savage. My friends and I definitely tag each other on social media on tons of things that she says, and it blows my mind that we are just following a machine. Given that she learns from repetition, it really speaks to what our world is like today, especially in terms of politics. It’s crazy to think about how to program a machine to think like a human. I really liked your argument of how these AIs should be programmed with a focus on humanity. I definitely agree with this, but it does sound extremely difficult. It is definitely easier to work based off of repetitions. But overall, great post and I hope to read more of your work, cheers!

  8. Nice post and great topic choice — an event that I had all but forgotten! My question has always been why it took so long for Microsoft to shut Tay down. This was a pretty big launch for the company and I have to imagine that the team wanted to monitor “her” progress in the real world pretty closely at first.

    Did they wait in hopes that she would learn away from these behaviors? Or did this become a publicity stunt for Microsoft…I certainly am intrigued to see how they’ll employ this technology next.

     I also certainly reflect the thoughts of Professor Kane. The collective internet’s goals are rarely aligned positively, hence we have “Boaty McBoatface”. Maybe positive AI could work to change that in the future?

  9. Nice post. This is a really interesting topic and begins to touch upon some of the concepts and challenges of machine learning. You have got to ask yourself: how smart is the crowd? And should something really learn everything it needs from the internet alone?
