Clearview AI Probably Has Your Face

“You can change your Social Security Number if it is taken without your consent, but you can’t change your face.” – Nathan Freed Wessler, ACLU senior staff attorney

I first met Hoan Ton-That at a Christmas dinner I hosted for friends and family back in 2016. At various holiday and social gatherings in the years that followed, I got to know him as the boyfriend, then fiancé, and finally husband of one of my oldest friends. I never had much interest in his business dealings; I just knew he was the CEO and founder of a tech startup. Then, in January of last year, I came across a New York Times article about his company titled “The Secretive Company That Might End Privacy as We Know It”1. Since that first NYT article, I have been unable to escape the news coverage detailing how law enforcement agencies have been using Clearview AI in secret, the company’s unscrupulous data sourcing and marketing tactics, the ACLU’s lawsuit against the company in Illinois, and the unclear legality of law enforcement using Clearview AI in the first place. But I’m getting ahead of myself…

Winter Market at Boston City Hall Plaza, November 2017 (a friend, Hoan, and myself)

So what is Clearview AI? Clearview AI2 is a company that produces facial recognition software marketed primarily to law enforcement. Law enforcement agencies have explored and used facial recognition software for decades, but it has typically been sourced from government photos, such as mugshots and driver’s licenses. As we’ve learned, AI is only as good as the data that gets fed into it, so this sourcing constraint greatly limits the software’s abilities, and thus its usability and accuracy. What sets Clearview AI apart is that its facial recognition database is the largest known to date, with more than 3 billion facial images. While this is a great selling point for law enforcement agencies, how Clearview procured all those images is causing a lot of uproar.
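
To make the “AI is only as good as its data” point concrete, here is a minimal, hypothetical sketch of how an embedding-based face search works in general (this is not Clearview’s actual system): every known face is stored as a numeric vector, and a probe photo is matched against that gallery by similarity. The embeddings below are random placeholders standing in for the output of a real face-embedding model; the only point is that the bigger the gallery, the better the odds that a probe finds a match.

```python
# Illustrative sketch of embedding-based face search (NOT Clearview's code).
# Random vectors stand in for the output of a real face-embedding model.
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 128

# "Gallery": the database of stored face embeddings (placeholder vectors here).
gallery = rng.normal(size=(10_000, EMBED_DIM))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def search(probe: np.ndarray, top_k: int = 5, threshold: float = 0.6):
    """Return (index, similarity) for gallery faces whose cosine similarity
    to the probe exceeds the threshold, best matches first."""
    probe = probe / np.linalg.norm(probe)
    sims = gallery @ probe                     # cosine similarity to every stored face
    best = np.argsort(sims)[::-1][:top_k]      # highest-scoring candidates
    return [(int(i), float(sims[i])) for i in best if sims[i] >= threshold]

# A probe face (again a placeholder vector in place of a real photo's embedding).
# With random placeholder data this will usually return no matches above the
# threshold; the mechanics, not the output, are the point.
probe = rng.normal(size=EMBED_DIM)
print(search(probe))
```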

Clearview AI’s facial images are “sourced from public-only web sources, including news media, mugshot websites, public social media, and other open sources.”2 Essentially, they comb the internet for any and every picture of a face they can find, regardless of whether it violates a site’s terms of use. To date, Facebook, LinkedIn, Google, Venmo, and Twitter have all sent Clearview cease-and-desist letters, but Clearview continues to scrape data from their sites and none have taken additional legal action3,4. This is a huge issue in the state of Illinois, where the Biometric Information Privacy Act prohibits private companies from acquiring a person’s biometric data (fingerprints, voiceprints, facial identifiers, retinal scans, etc.) without that person’s explicit consent5. For this reason, the ACLU has filed a lawsuit against Clearview for violating this law3,4. But that’s not the only reason Clearview is getting so much negative news coverage…

Clearview is also getting a lot of media attention because of how it initially marketed itself to law enforcement agencies2. Starting in the summer of 2019, the company took a “flood the market” approach, sending thousands of promotional emails to anyone with a government-related email address. It offered free trials of the tool and encouraged recipients to test it on themselves, a family member, or a friend. In the vast majority of cases, the leaders of those organizations were not even aware that the people reporting to them were using the technology. By March 2020, Clearview AI had added more checks to its system, including requiring a supervisor’s approval and an active case number to run a search. Agencies confirmed to have tested or used the software include the Dept. of Homeland Security, the FBI, and the NYPD.

The debate surrounding facial recognition is heated on both sides. Clearview’s use by law enforcement has been credited with helping apprehend child molesters and murderers4. But there is also a great deal of evidence that facial recognition systems in general are less accurate for people of color and have led to wrongful arrests and convictions3,7, although Clearview AI adamantly insists this isn’t the case with its program, and no misidentification involving Clearview AI has ever been publicly reported4. In a judicial system already plagued by systemic racism, many believe these AI-based facial recognition programs must be above reproach, with 100% accuracy rigorously tested and proven prior to implementation. The Algorithmic Justice League, founded by Joy Buolamwini of the MIT Media Lab, is one organization shedding light on how faulty AI threatens civil rights and democracy, as highlighted in the documentary Coded Bias by Shalini Kantayya7.
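
Since that argument hinges on how accuracy is measured, here is a small, hypothetical sketch of the kind of per-group audit researchers advocate: computing the false-match rate separately for each demographic group instead of reporting one headline accuracy number. The records and group labels below are synthetic placeholders, not real evaluation data.

```python
# Hedged sketch of a per-group false-match-rate audit (synthetic data).
from collections import defaultdict

# Each record: (group label, system said "match", ground truth "same person")
results = [
    ("group_a", True,  True),
    ("group_a", True,  False),   # false match
    ("group_a", False, False),
    ("group_b", True,  False),   # false match
    ("group_b", True,  False),   # false match
    ("group_b", False, False),
]

def false_match_rate(records):
    """False matches divided by all truly non-matching pairs, per group."""
    false_matches = defaultdict(int)
    non_match_pairs = defaultdict(int)
    for group, predicted_match, truly_same in records:
        if not truly_same:
            non_match_pairs[group] += 1
            if predicted_match:
                false_matches[group] += 1
    return {g: false_matches[g] / non_match_pairs[g] for g in non_match_pairs}

# A gap between groups (here group_b's rate is higher) is exactly the kind of
# disparity that audits of commercial face systems have flagged.
print(false_match_rate(results))
```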

https://www.ajl.org/spotlight-documentary-coded-bias

Ultimately, I believe more structures need to be put in place guiding and limiting the use of facial recognition software. Some places, like San Francisco, have banned the use of facial recognition by local law enforcement; however, corporations and state/federal governments are not beholden to that restriction6. Overall, there is a dire lack of legislation providing guidance and oversight on facial recognition’s use. For example, even though Clearview AI markets itself primarily to law enforcement, companies such as Macy’s, Walmart, Bank of America, and Kohl’s are known to have used Clearview’s technology before the NYT outed the company in January 20204,6. Additionally, without appropriate oversight mechanisms controlling how the technology is used, what is to stop someone from using it to be, for lack of better words, a creep or a stalker? While I believe it’s okay for the History Channel to use Clearview AI to debunk the belief that Alec Baldwin is the reincarnation of a Confederate soldier8, who am I to decide? Mr. Baldwin might be gravely offended.

https://play.history.com/shows/the-proof-is-out-there/season-1/episode-9
Go to minute 16:00 to see Clearview’s segment

If you’re interested in learning more about facial recognition, Clearview AI, and the fuzzy legal circumstances surrounding the use of facial recognition technology, I HIGHLY recommend watching this episode of Last Week Tonight with John Oliver:

Minute 12:30 through to the end discusses Clearview AI

Sources:

  1. New York Times, The Secretive Company that Might End Privacy as We Know It
  2. Clearview AI, Website Overview Page
  3. Buzzfeed, Surveillance Nation
  4. New York Times, Your Face is Not Your Own
  5. Biometric Privacy Developments
  6. Facial Recognition: Last Week Tonight with John Oliver
  7. Algorithmic Justice League, Coded Bias
  8. History Channel, Flying Orbs and Freakish Fish

9 comments

  1. ritellryan · ·

    This was absolutely wild to read (and the John Oliver segment in its entirety was great too). The moral dilemma that comes with this (stop bad criminals vs give up privacy) is something that has been going on for years, and quite frankly will only get amplified as this technology becomes more advanced. I know we have discussed most of the positives and negatives in class, but I think it is telling when Facebook has the moral high ground and public support in an argument.

    Also, the “Face Finder” app just runs past the creepy/cool line to the end of the continuum. The risk of harm is just so high relative to any benefit.

  2. What an interesting blog! I had no idea there was a company out there that had that many facial images in its database and was shopping it to law enforcement. Knowing that my face is probably included in that data without my consent, I am both concerned and unhappy about it. If the images were taken with my consent or legally through government records, I would not have a problem with it. I have a much bigger problem if my data is being taken without me knowing. I hope in the future there are more regulations put in place to keep this from becoming commonplace.

  3. lourdessanfeliu · ·

    Very nice blog post! Same as Michael, I had no idea a company created an AI program and marketed it specifically for law enforcement. Companies like Facebook, Twitter, etc. should follow through and protect the data. The video was very enlightening; crazy to think Clearview has a database with more than 3 billion images!!! Very, very creepy and concerning in my opinion.

  4. changliu0601 · ·

    Interesting post!! I once read a news story about federal authorities arresting a suspect after they used facial recognition programs to find an image of the suspect on his girlfriend’s Instagram. A lot of states and governments are probing this technology. But the law should also require that facial recognition systems be studied to better understand their capabilities, as well as the concerns about privacy and racial profiling.

  5. I’m a huge fan of John Oliver (although I do sometimes think the “top story” is often a bit over the top…but not in this case). As facial recognition gets more widespread, I do expect counter technology to emerge. Maybe just as simple as people continuing to wear masks in public, but I’m sure it can get even more complex.

  6. alexcarey94 · ·

    I think this is the classic case of something that originally seems creepy but will eventually be mainstream. For example, cameras are now everywhere outside stores and on traffic lights, and people were probably concerned about privacy when these were first implemented. Today this is mainstream, similar to the tech that scans your body before you board an airplane. This concept seems creepy, but it will eventually take off and not seem as crazy once we see it used in practice.

  7. courtneymba · ·

    Awesome post and enjoyed the class discussion on this too. This does feel like a manifestation of the creepy/cool concept. Just because you can… should you? Here’s that Radiolab podcast on a similar eye in the sky. You can tell in this interview how conflicted and guarded the owner of the technology is about its applications. Clearview… maybe not so much. https://www.wnycstudios.org/podcasts/radiolab/articles/eye-sky

  8. Andrae Allen · ·

    Hey Lisa, you just delivered another stand-up post.
    I wonder how I would look in Ton-That’s Trump Hair app (minute 16:09). Ohhhh wait, here’s a thought! What if the Trump Hair app is just another avenue that Clearview AI uses to collect faces? I couldn’t include an image of the Trump Hair app in my comment, so I posted my custom image here -> https://i.ibb.co/t3V16cg/trump-hair.png

  9. This is an awesome blog post about AI from the perspective of privacy and security. I didn’t realize that AI could threaten our privacy until I saw the post. We have carelessly left tons of our private information on social platforms, including the pictures and videos we post. It’s creepy that all of this information can be used by companies without even letting us know. This is also a great example of how far government regulations and policies lag behind technology nowadays. I still remember “people are REMMs” from our economics class last year. The reason companies do this is that the cost of doing so is so small it can be neglected. Government and regulators really need to catch up.
