“You can change your Social Security Number if it is taken without your consent, but you can’t change your face.” – Nathan Freed Wessler, ACLU senior staff attorney
I first met Hoan Ton-That at a Christmas dinner I hosted for friends and family back in 2016. At various holiday and social gatherings in the years that followed, I got to know him as one of my oldest friends’ boyfriend, then fiancé, and finally husband. I never had much interest in his business dealings; I just knew he was the CEO and founder of a tech startup. Then, in January of last year, I came across a New York Times article about his company titled “The Secretive Company That Might End Privacy as We Know It”1. Since that first NYT article, I have been unable to escape the news coverage detailing how law enforcement agencies have been using Clearview AI in secret, the company’s unscrupulous data sourcing and marketing tactics, the ACLU’s lawsuits against the company in Illinois, and the unclear legality of law enforcement using Clearview AI in the first place. But I’m getting ahead of myself…
So what is Clearview AI? Clearview AI2 is a company that produces facial recognition software marketed primarily to law enforcement agencies. Law enforcement has explored and utilized facial recognition software for decades; however, those systems have typically been sourced from government photos, such as mugshots and driver’s licenses. As we’ve learned, AI is only as good as the data that gets fed into it, so this data sourcing constraint greatly limits the software’s abilities, and thus its usability and accuracy. What sets Clearview AI apart is that its facial recognition data source is the largest known to date, with more than 3 billion facial images. While this is a great selling point for law enforcement agencies, how Clearview procured all those images is causing a lot of uproar.
Clearview is also getting a lot of media attention because of how it initially went about marketing itself to law enforcement agencies2. Starting in the summer of 2019, the company took a “flood the market” approach, sending thousands of promotional emails to anyone with a government-related email address. It offered free trials of its tool and encouraged recipients to test it on themselves, a family member, or a friend. In the vast majority of cases, the leaders of these organizations were not even aware that those reporting to them were utilizing this technology. By March 2020, Clearview AI had added more checks to its system, including requiring a supervisor’s approval and an active case number to run a search. Some agencies confirmed to have tested or used the software include the Dept. of Homeland Security, the FBI, and the NYPD.
The debate surrounding facial recognition is heated on both sides. Clearview’s use by law enforcement has been cited as assisting in apprehending child molesters and murderers4. But there is also a great deal of evidence showing that facial recognition systems in general are less accurate for people of color and have led to wrongful arrests and convictions3,7, although Clearview AI adamantly insists this isn’t the case with its program, and no misidentification with Clearview AI has ever been reported4. In a judicial system already plagued with systemic racism, many believe these AI-based facial recognition programs must be above reproach, with 100% accuracy rigorously tested and proven prior to implementation. The Algorithmic Justice League, founded by Joy Buolamwini of the MIT Media Lab, is one organization shedding light on how faulty AI poses a threat to civil rights and democracy, which is highlighted in the documentary Coded Bias by Shalini Kantayya7.
Ultimately, I believe more structures need to be put in place guiding and limiting the use of facial recognition software. Some places, like San Francisco, have banned the use of facial recognition by local law enforcement; however, corporations and state/federal governments are not beholden to that restriction6. Overall, there is a dire lack of legislation providing guidance and oversight on facial recognition’s use. For example, even though Clearview AI markets itself primarily to law enforcement, before the NYT outed the company in January 2020, companies such as Macy’s, Walmart, Bank of America, and Kohl’s are known to have used Clearview’s technology4,6. Additionally, without appropriate oversight mechanisms in place controlling how the technology is used, what is to stop someone from using it to be, for lack of a better word, a creep or a stalker? While I believe it’s okay for the History Channel to use Clearview AI to debunk the belief that Alec Baldwin is the reincarnation of a Confederate soldier8, who am I to decide? Mr. Baldwin might be gravely offended.
If you’re interested in learning more about facial recognition, Clearview AI, and the fuzzy legal circumstances surrounding the use of facial recognition technology, I HIGHLY recommend watching this episode of Last Week Tonight with John Oliver:
1. New York Times, The Secretive Company That Might End Privacy as We Know It
2. Clearview AI, Website Overview Page
3. Buzzfeed, Surveillance Nation
4. New York Times, Your Face Is Not Your Own
5. Biometric Privacy Developments
6. Facial Recognition: Last Week Tonight with John Oliver
7. Algorithmic Justice League, Coded Bias
8. History Channel, Flying Orbs and Freakish Fish