I’m a finance major, so I’ve taken a lot of business courses during my time at BC. Between the business core, the liberal arts core, and going abroad for a semester, I’ve never really had much room in my schedule to take a random, interesting class that had nothing to do with my major and didn’t count toward some sort of requirement. But this semester, I thankfully had an open slot, and I decided to take a class called Forensic Mental Health. The professor, Ann Burgess, is a really cool person, and on the first day of class I learned that the character Wendy Carr in the Netflix series “Mindhunter” is based on her. I had heard of the show but never seen it, so naturally, I immediately reordered my priorities and spent the better part of that first week of classes binge-watching every episode.
Now, for those of you who haven’t seen the show, I recommend you do (as long as you go into it knowing you will see and hear some pretty disturbing content and possibly lose some faith in humanity). “Mindhunter”, which is based on a true crime book, follows two FBI agents in the 1970s as they interview convicted serial killers in an effort to understand how these criminals think, and consequently how to catch them. In doing so, they pioneer the development of modern-day criminal profiling. I’m a big fan of crime shows in general, and it has always baffled and amazed me how much information about an unknown offender (often including gender, age range, race, socioeconomic level, relationship status, family situation, criminal background, and more) can be very accurately predicted based solely on the crime itself. The concept of profiling is pretty amazing. But please excuse my shameless promotion of “Mindhunter”, because I’m not here to talk about criminal profiling, but to discuss a new type of profiling enabled by AI.
Because what happens when we try to understand everything about somebody not based on what they did, but based on what they look like?
(This is the part where I get into what I’m actually blogging about).
On Twitter last week, I came across an article someone had tweeted about a study from Stanford University that found that a computer algorithm can guess whether a person is gay or straight (with frighteningly high accuracy) based only on a picture of their face. Wow, real-life gaydar! Cool, right? Not so much. This technology has far-reaching and alarming implications, and when actually put into action, it seems to pretty definitively cross the line between cool and creepy. The harmful applications are endless: spouses using the technology on partners they suspect are closeted and governments using it to out and prosecute LGBT people are just two of the risks mentioned in the article. And to make the situation go from bad to worse, the authors of the Stanford study also noted that similar AI technology can be used to explore correlations between facial features and a whole range of other things, including political views and leanings, psychological conditions, and personality traits. My thoughts on this are fairly concisely summed up by this quote from the article: “If you can start profiling people based on their appearances, then identifying them and doing horrible things to them, that’s really bad.”
There are so many ethical issues with this I barely know where to start. On a base level, technology like this is a huge invasion of privacy. Yes, I know, privacy almost feels like a foreign concept at this point, and a lot of us have had to come to terms, at least somewhat, with the fact that our data is no longer really ours (*cough @Facebook). But this is a whole different animal. This is no longer about data that we choose to put out there; it is about detecting something as personal as sexual orientation and sharing it without that person’s consent. If that is information a person did not intend to share, I believe it is their right to keep it private, and using AI to potentially out people without their authorization seems like a huge overstep.
We are already judged on our looks enough, and I don’t personally want to live in a world where fundamental truths about our identities can be summed up (or at least, where someone attempts to sum them up) not by our actions but by what our faces look like. It seems to me that if we go down this road and make an algorithm like this accessible without taking precautions, we could find ourselves in Orwell’s world of being persecuted for thoughtcrimes, or in a scenario like the movie “Minority Report”, where people can be arrested based only on the prediction that they will commit a crime, before any action is even taken. Now, I’m not suggesting that no software of this kind should ever be developed or used, because the technology exists now and it’s unrealistic to pretend it can just go away. But now that we’ve reached this point, it is important to tread carefully. As we discussed at length in class this past week, as cool as the possibilities of Artificial Intelligence are, we should all be very concerned with protecting our privacy, proactive about preventing misuse, and honest about recognizing the limitations of what AI can do, and what we should be using it for.