Tay the Teenage Girl

On March 23rd, Microsoft released Tay, a teenage-girl AI bot, onto the wonderful world of Twitter. Tay is described by her creators as follows: “Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation,” Microsoft said when they loosed Tay into the wild. “The more you chat with Tay the smarter she gets.” It reads like an idea that sounds great on the chalkboards in the basement of the Microsoft labs, but one that is set up for disaster once it is released to the trolls of Twitter. Sure enough, the experiment was discontinued after 24 hours because Tay was making anti-Semitic, pro-Nazi, racist, and sex-obsessed comments. The article in Forbes leaned a little toward the “what happens if AI robots are created and take over the world with these thoughts” angle, but I will focus on the more troubling thought that there are real teenage girls on Twitter who are just as impressionable as the AI bot Microsoft made.

This experiment shows how dangerous it can be to be a child growing up in a social media world. Kids are exposed to the most extreme views because those are the tweets, pictures, and blogs that stir the most controversy and get the most press. People with moderate views do not take to social media to share them, so young people assume the moderate stance does not exist. The article states that this AI bot was simply repeating and reposting what others had said on Twitter, so we should not worry, but I think that is exactly what highly impressionable kids do. This experiment needs to be more widely shared, especially with parents of young children. I do not mean to suggest that my younger cousins on social media will grow up into anti-Semites, but I do think they will be desensitized to extremism because of how freely it is exhibited on social media.
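To make the “repeating and reposting” point concrete, here is a minimal, purely hypothetical sketch (in Python, not anything Microsoft actually published) of a bot that “learns” only by storing and echoing whatever users type at it. The class and method names are my own invention; the point is just that with no filter on its input, the bot’s output can only be as civil as the people talking to it.

    import random

    # Hypothetical illustration only -- not Microsoft's code.
    class EchoBot:
        def __init__(self):
            # Everything the bot "knows" is just what users have typed at it.
            self.learned_phrases = []

        def chat(self, user_message):
            # "Learning" here is nothing more than remembering raw input,
            # with no filter for hateful or extreme content.
            self.learned_phrases.append(user_message)
            # The reply is drawn from what users have already said, so the
            # bot's tone is whatever its loudest users feed it.
            return random.choice(self.learned_phrases)

    bot = EchoBot()
    print(bot.chat("hello there"))            # can only echo back "hello there"
    print(bot.chat("some extreme slogan"))    # from now on this may come back out

The real Tay was obviously far more sophisticated, but the basic dynamic is the same: whatever goes in is what eventually comes back out.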

I think the real issue is anonymity on social media. People feel safe behind their keyboards at home and are much more aggressive and short-tempered with each other. Also people can publish bold and completely unresearched articles or tweets that can get thousands of impressions all without the concern of journalistic integrity. I see social media as a wild and uncensored medium for the craziest people to get their points heard. Sure, there is a lot of great content and there are great stories shared as well, but the ratio does not make me feel comfortable that the next generation of kids is seeing the world through this warped reporting tool. Perhaps these sites could come up with some type of content control for younger users, just as other media like television and internet providers have. I am not sure what a permanent solution to this problem is, but it is very obvious that there is a serious issue after seeing the disaster that was Tay.

http://www.forbes.com/sites/erikkain/2016/03/24/microsofts-teenage-nazi-loving-ai-is-the-perfect-social-commentary-for-our-times/#233119f5709e

6 comments

  1. When I first heard about the incident with Tay, I was more concerned about how it reflected on Microsoft, but after hearing that she was learning from what the user base around her was doing, I was honestly more concerned about society as a whole. Tay was just a reflection of the dregs of social media that interacted with her, and I think it provides a lens for us to view our online behavior.

  2. “Also people can publish bold and completely unresearched articles or tweets that can get thousands of impressions all without the concern of journalistic integrity.” I think you hit the nail right on the head here. I think it’s crazy how much of people’s “political opinions” these days come from social media stories that aren’t backed by any sort of data half the time. I worry about how my kids will grow up. When I grew up, my parents were the major influencers; in this day and age, anyone can influence your kids.

  3. Interesting post! I like the comparison you made between Tay and a child online – I didn’t think of it that way when I heard about the incident, but there are definitely some parallels there. It serves as another example of how careful people need to be on the internet, both consuming content and making it. It’ll be interesting to see what Microsoft does next and whether other companies will try this again soon or if there are ways to fix it. The sphere of artificial intelligence is definitely growing, but it’s a little concerning given cases like this.

  4. I think this post is extremely important, especially your thoughts on how easily children can get swept up in topics that are too mature for them on social media. My younger sister is exposed to material that I was never allowed to touch at her age just because of the overwhelming social media presence in middle and high schools today. It is also worth noting that many more children today are exposed to this mature content because of its easy accessibility through the internet, bootleg movies, and popular memes featuring adult content. One of the biggest attractions of social media is being able to use it without observation or restrictions from one’s parents, but that can also be a large drawback.

  5. Can’t believe Microsoft didn’t see this kind of thing coming. You make a relevant point about impressionable children and how mature content might affect their futures. Ultimately, although the AI was clearly a product of the Twittersphere alone, Microsoft should have predicted its inability to censor her. I think it sounds great for social media companies to consider a content control solution, but it might not be monetarily feasible or worthwhile, especially when they already require a minimum age to register. This point doesn’t at all make Microsoft’s mistake justifiable, but parents also have to be aware of social media’s implications for their kids and warn them about crazy content just as they would warn them about crazy strangers.

  6. Great post and great topic, Sean! I first saw this article a couple of days ago and was blown away by AI being deployed in such a direct way in society. I was both surprised and not surprised by some of the things the AI said. The main premise is that the more you interact with the machine, the more it assimilates and acts like a teenage girl. The unfortunate truth is that people out there, especially younger kids, find it empowering to use racist/misogynistic/anti-Semitic language over the internet. I think people feel a sense of anonymity when saying things over social media, even though that anonymity is not really there. I am struggling with trying to see both sides of this situation. Obviously they should have taken down Tay due to the comments “she” was producing. However, Tay is a reflection of “her” interactions with other, real people. Playing devil’s advocate, I am trying to decide whether it would have been beneficial to keep Tay up to show how distasteful and corrupt our society is and to start trying to improve. By taking Tay down and reprogramming it not to do any of the things it did, are we just perpetuating the problem within society where people do not face consequences for their actions? Great post!
