Profiling: The Good, The Bad, and the Horribly Concerning

I’m a finance major, so I’ve taken a lot of business courses during my time at BC. Between the business core, the liberal arts core, and a semester abroad, I’ve never had much room in my schedule for a random, interesting class that had nothing to do with my major and didn’t count toward some requirement. But this semester I thankfully had an open slot, and I decided to take a class called Forensic Mental Health. The professor, Ann Burgess, is a really cool person, and on the first day of class I learned that the character Wendy Carr in the Netflix series “Mindhunter” is based on her. I had heard of the show but never seen it, so naturally, I immediately reordered my priorities and spent the better part of that first week of classes binge-watching every episode. 

Now, for those of you who haven’t seen the show, which I recommend you do (as long as you go into it knowing you will see and hear some pretty disturbing content and possibly lose some faith in humanity): “Mindhunter,” which is based on a true crime book, follows two FBI agents in the 1970s as they attempt to understand how criminals, specifically serial killers, think, in an effort to catch them. By interviewing convicted serial killers, they pioneer the development of modern-day criminal profiling. I’m a big fan of crime shows in general, and it has always baffled and amazed me how much information about an unknown offender (often including gender, age range, race, socioeconomic level, relationship status, family situation, criminal background, and more) can be accurately predicted solely from the crime itself. The concept of profiling is pretty amazing. But please excuse my shameless promotion of “Mindhunter,” because I’m not here to talk about criminal profiling, but to discuss a new type of profiling enabled by AI. 

Because what happens when we try to understand everything about somebody not based on what they did, but based on what they look like?

(This is the part where I get into what I’m actually blogging about).

On Twitter last week, I came across an article someone had tweeted about a study from Stanford University that found that a computer algorithm can guess whether a person is gay or straight (with frighteningly high accuracy) based only on a picture of their face. Wow, real-life gaydar! Cool, right? Not so much. This technology has far-reaching and alarming implications, and when actually put into action, it seems to pretty definitively cross the creepy-cool line. The harmful applications are endless: spouses using the technology on partners they suspect are closeted, and governments using it to out and prosecute LGBT people, are just two of the risks mentioned in the article. And to make the situation go from bad to worse, the authors of the Stanford study also noted that similar AI technology can be used to explore correlations between facial features and a whole range of other things, including political views and leanings, psychological conditions, and personality traits. My thoughts on this are fairly concisely summed up by this quote from the article: “If you can start profiling people based on their appearances, then identifying them and doing horrible things to them, that’s really bad.” 

There are so many ethical issues with this that I barely know where to start. On a basic level, technology like this is a huge invasion of privacy. Yes, I know, privacy almost feels like a foreign concept at this point, and a lot of us have had to come to terms, at least somewhat, with the fact that our data is no longer really ours (*cough @Facebook). But this is a whole different animal. This is no longer data that we choose to put out there; it is technology detecting something as personal as sexual orientation and sharing it without the person’s consent. If that is information a person did not intend to share, I believe it is their right to keep it private, and using AI to potentially out people without their authorization seems like a huge overstep. 

We are already judged on our looks enough, and I don’t personally want to live in a world where fundamental truths about our identities can be summed up (or at least, attempted to be) not by our actions but by what our faces look like. It seems to me that if we go down this road and make an algorithm like this accessible without taking precautions, we could find ourselves in Orwell’s world of being persecuted for thoughtcrimes, or in a scenario like the movie “Minority Report,” where people can be arrested based only on the prediction that they will commit a crime, before any action is even taken. Now, I’m not suggesting that no software of this kind should ever be developed or used, because the technology exists now and it’s unrealistic to pretend it can just go away. But now that we’ve reached this point, it is important to tread carefully. As we discussed at length in class this past week, as cool as the possibilities of artificial intelligence are, we should all be very concerned with protecting our privacy, and be proactive about preventing misuse and recognizing the limitations of what AI can do, and what we should be using it for. 



  1. I’m in Forensic Mental Health with Professor Burgess as well, and the same things that caught your attention caught mine too! Studying for the midterm these past couple of days, I’ve been reminded of just how much you can determine about a criminal based on a crime scene. This could be a really cool application of AI, but as we discussed in class and as you talked about in your post, this kind of profiling can be pretty nasty and can build in biases originating from humans.

    I was also actually the one who tweeted the AI gay-dar article as I thought it was super problematic. I’m really glad they did the study and released the results (without divulging the actual tech itself) because it is incredibly important to consider the ethical implications of AI while we emphasize what a potentially earth-shattering development it is.

  2. I think I really have to step up my Netflix game, as I have yet to watch Minority Report after hearing about it in class over 10 times, or Mindhunter, but I did read Annie’s tweet, so bonus points? Anyways, I agree that using AI to detect or predict a person’s sexual orientation or actions has some serious implications. Using AI to predict/diagnose these things completely removes any privacy or sense of agency these people may have. Similarly, while I agree criminals are bad and should be stopped, that doesn’t mean I believe someone with a 3-inch nose should be arrested just because they are likely to commit a crime. However, Professor Kane brought up an interesting and quite controversial use of AI: the social credit system in China. China has invested billions of dollars into millions of security cameras monitoring every public space. It leverages AI with these cameras to identify individuals in view, observe their actions, and assign each person a “good samaritan” score. This score is then used as a data point when those individuals apply for loans if they have no other credit history. Crazy, right? Tracking every individual’s actions in a country of over 1 billion people. That’s the power of AI!

  3. Jaclin Murphy · ·

    First off, thank you for giving me a new show to binge watch. I’m literally queuing up Netflix as I type this. Second, when we discussed the “gay-dar” AI in class, my immediate thought was: why? Why make something like this? What is the point, what is the gain, other than a mediocre parlor trick? Are we just going to start scanning people’s faces and outing them without their consent? In what context would you NEED to know someone’s sexual orientation? It seems like a waste of brainpower and energy to create something like this. It’s scary to think that we have the power to do this. Also, it’s not even accurate all the time! AI, no matter how much I discuss the great things it will do for us, leaves me with an unsettled feeling. The type of feeling like, “ooh, maybe we shouldn’t.” Great blog though!

    1. cgriffith418 · ·

      @jaclinmurphy1523 Such a good point. Your question of “why” reminds me of our conversation in class about using AI for the right things, rather than just to play Where’s Waldo and flex our AI muscles. I would imagine there is a ton of potential for positive use of AI in forensics, but as you said, it seems like in this day and age, with our more evolved views on sexual orientation, there is no reason for “gay-dar.” People certainly shouldn’t be able to use it on other people, and I can’t imagine any scenario where someone would want to use it on themselves. This is definitely one of the AI mechanisms that makes me think “slippery slope.” Imagine if they started trying this for religion…oof!

    2. Hey Jaclin, I totally understand your point, but I think “why?” is the wrong question to ask. So many amazing discoveries have come accidentally from people trying something and realizing something new. X-rays, microwave ovens, penicillin, and pacemakers were all accidental inventions that came about when people were trying to accomplish an entirely different goal. When we tried to go to the moon, many people asked “why?”, but countless innovations have been developed from the research done to accomplish space travel. Return on investment sometimes comes in surprising ways. The nature of innovation is often that you throw spaghetti against the wall and see what sticks. I’m not saying that this gay-dar is going to revolutionize AI, but I think it’s a fascinating way to examine and test the applications of new technologies.

  4. jimhanrahan7 · ·

    First off, good for you for taking the “stimulating” course over another “practical” one. As someone who graduated college 10 years ago, I can tell you that you’ll remember the stimulating ones long after the finance minutiae has slipped away.
    In a world where – to your point – we judge entirely too much on looks already, are we just sending our biases into hyper-speed? How do we ensure that our government isn’t already installing this technology at airports? I have to imagine that if we’re not there now, we will be soon. The incentives are too great and the perceived cost of failure too high. The only solution I can imagine is voting. Making sure that those in power are held accountable for everything from walls to wiretapping is the only way we can rein in these creeping technological advances. I don’t believe we can stop the freight train of innovation, but we can have a say in how it’s used.

  5. What happened to not judging a book by its cover?? The idea of AI being powerful enough to scan someone and make assumptions based on their looks is insane to me. While there may be specific cases where this would be useful, the whole idea of profiling leads me to believe that a whole lot of BAD would come out of this. Anything powerful in the wrong hands can lead to disaster, and I can’t imagine the amount of data that could be stolen and used maliciously. There have been too many times when I was younger that I made judgments based on someone’s looks and was completely wrong. Even as AI continues to get better, there will still be that percentage of cases where it lacks the human element needed to make good judgments and identify what is considered right or wrong. The biggest concern with this technology is what governments could leverage it for, acting against a specific race or group of people they do not want to have rights or privileges. It’s also super cool that you’re in a class with Ann Burgess! I took her Victimology class when I was in undergrad, and it was definitely outside my field of study (I was in Economics). It was eye-opening and a great way to discover something cool!

  6. dancreedon4 · ·

    MindHunter was a great series that opened my eyes to criminal profiling; loved the show! Props to you for challenging yourself in your senior spring semester; I can’t say I did the same a few years back. As for AI and “gay-dar,” this seems extremely dangerous and controversial, especially in countries where it is illegal to be part of the LGBT community, or where it is not necessarily illegal but unsafe. If certain individuals got their hands on this AI tech, people could be targeted and even killed if “detected,” whether they truly are LGBT or not. Other issues with being judged by AI based on your face are just as controversial, such as being accused of a crime before committing it. Hopefully our elected officials can enact some regulations and laws before it’s too late, but I’m not holding out hope. Great Office meme.

  7. kgcorrigan · ·

    This was a great post, and I appreciate that you are continuing the discussion surrounding ethical issues and AI. I agree that this new type of profiling, enabled by AI, lends itself to a lot of potentially harmful implications, and I personally feel that it is a huge invasion of privacy. Although it is amazing that we have been able to develop technology to this point, I think we need to be careful about not taking it too far, especially to a point where it could put innocent people in danger. I hope that as more advanced tech becomes ingrained in our society, regulation is developed to keep the “creepy-cool” line in check, but I think only time will tell. On a separate note, I need to add Mindhunter and Minority Report to my watch list after hearing about them in this class!

  8. First off, thank you for the great Netflix recommendations; I’m always looking for new binges. I agree with you that we already live in a society of quick judgment that is critical of looks, and this could just magnify that. The use of AI to identify sexual orientation is something I find really concerning. You are correct that this is a piece of information a person chooses to share and should never be forced to give. While I do think there are many positive uses for AI, such as helping criminal profiling, uses like a “gaydar” are misuses. As technology advances, it is interesting to see the biases we are creating, and that is something we should all be mindful of. We are all human with inherent biases, and technology is a human output, so it’s important to have perspective.

  9. thekidbeats19 · ·

    I’ll reiterate my classmates’ comments by first saying thank you for adding another show to my backlog of Netflix shows; weekend plans cancelled. I commend you for taking the class discussion further on what could be seen as a sensitive subject. Utilizing AI to identify whether someone is gay seems to me to be a complete waste of investment and an invasion of privacy. AI’s ability to construct predictive analytics based on facial features, or any physical features for that matter, to categorize a person for anything other than health reasons (identifying disease, obesity, physical injury, etc.) is, I think, a step in the wrong direction. It only reaffirms unjustified prejudice, especially in America, where diversity of facial and physical features is among the greatest in the world. Your reference to Minority Report is again spot-on, and makes me think about what could happen to a young African American man living in downtown Boston (me) if criminal intent were judged purely on my facial features and the color of my skin, based on a historical set of data applied to a current context. Hmm.

  10. This blog reminded me of the TedTalk I watched for my group last week titled, “AI Makes Human Morals More Important.” As AI continues to evolve into an increasingly powerful, predictive tool, our need for morals rises exponentially. We must be mindful about the potential forms of misuse and be proactive about ensuring that it is being used for a greater purpose. I would be lying if I said I wasn’t worried for our future… I genuinely worry that AI would contribute to a rise in unwarranted judgment, prejudice, and discrimination within our society.
