On February 28, 2020, a proposed class action suit was filed in California federal court against facial recognition company Clearview AI. This marks the fourth lawsuit filed this year against the company. Clearview AI's biometric technology matches uploaded photos of any person against its database and returns links to the images' web origins. The complaints cite violations of the California Consumer Privacy Act of 2018 (CCPA) and the Illinois Biometric Information Privacy Act (BIPA). Specifically, Clearview AI faces allegations of using face scan data without express consent.
What is Clearview AI?
Clearview AI was founded in 2016 by tech entrepreneur Hoan Ton-That. Since its founding, the company has amassed a database of over three billion photos scraped from websites and social media. For comparison, this is roughly seven times the size of the FBI's own photo database of 411 million images.
The software company was largely unknown until just a few weeks ago. It was able to remain under the radar by advertising directly to law enforcement and security professionals. That changed when a New York Times exposé in January 2020 brought its capabilities to the public's attention, prompting fierce and widespread concern over personal privacy protection.
Demand For Facial Recognition Across Private and Public Sectors
The company previously claimed its target audience was solely law enforcement agencies. But a leak of its client records on February 27 revealed Clearview has expanded into markets far beyond law enforcement. According to those records, its customers span both the private and public sectors, including retailers Macy's and Best Buy, the FBI, universities, the Ministry of Defence, and even J.K. Rowling's charity, among many others. This is in addition to over 2,200 law enforcement agencies, both domestic and international.
Recent reports have also uncovered Clearview's tests of its facial recognition software paired with surveillance cameras and augmented reality (AR) glasses.
Class Action Lawsuit Allegations
The pending class action lawsuits against Clearview AI have been filed in Illinois, New York, and California. Despite geographical differences, they raise similar alarms about the threats that Clearview's technology poses to individual privacy. The most recent suit alleges that Clearview knowingly violated CCPA and BIPA through "its use of technology to collect, generate, and sell consumers' biometric information without their consent."
The suits also highlight personal safety concerns that could arise in the event that the software is leveraged without official oversight. The February 13, 2020 class action complaint filed in New York federal court directly references the New York Times exposé. It includes a quote from Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University: “The weaponization possibilities of this are endless. Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.” The complaint further labels Clearview’s technology as “Orwellian” and a novel danger to the future of private citizens’ security.
Further Clearview AI Outrage
Beyond the legal response, both government officials and tech leaders have voiced objections to Clearview's endeavors. Senator Edward Markey of Massachusetts authored an open letter to Clearview AI CEO Hoan Ton-That on January 23 demanding answers on a number of issues he called "particularly chilling." Senator Markey called for a full list of agencies currently using the software; a version of that list was leaked just weeks later. He also asked for details of employee access to private information, and whether Clearview had used its technology on children.
Major tech companies have also made their thoughts on Clearview's technology known. In January, Twitter sent Clearview a cease-and-desist letter claiming the company had violated its policies, and demanded that Clearview AI delete any data collected from its platform. Soon after, LinkedIn and Google sent their own cease-and-desist letters with similar policy claims. Facebook also released a statement demanding Clearview stop using image data lifted from its users' profiles.
In a more forceful response, Apple suspended Clearview's developer account on February 28, explaining that the company had violated its Enterprise Developer Program terms of service. This has essentially disabled the iOS version of the Clearview AI app; however, its Android and desktop versions remain active. Apple has given the company 14 days to comply or risk permanent revocation of the account.
How the Experts Can Weigh In
Whether the class action lawsuits against Clearview AI will settle or go to trial is still unknown. In either instance, the ensuing legal battles to protect the modern concept of anonymity for private citizens will demand high-tech expert opinions.
A software engineering expert witness will be important for establishing an understanding of the technology underlying Clearview AI's product. Expert testimony from software engineers will be central to laying out the mechanics of how Clearview built its image database and developed its biometric matching system.
A law enforcement expert witness will be key for understanding law enforcement agencies’ standard practices with facial recognition technology. Law enforcement experts can also opine on the appropriate use of this type of technology and the implications of misuse on private citizens.
A privacy expert witness can offer another perspective on the possible consequences of widespread use of facial recognition software. This expert can also speak to the ethical and legal complications that arise from access to biometric technology, and can address with authority the specific alleged violations of CCPA and BIPA.