- Clearview AI, a facial-recognition startup that scraped social media for images, has been adopted by at least 600 law-enforcement agencies, according to a New York Times report.
- The software developers relied on current and former Republican officials to sell the software to law-enforcement agencies.
- The agencies reportedly have little information about the origin of Clearview AI, which likely violated policies of sites like Facebook, Twitter, Instagram, and YouTube to create its database of billions of photos.
- There has been growing concern over law enforcement’s use of facial-recognition technologies, particularly fears that the tools have a racial bias.
A facial-recognition startup is being used by hundreds of law-enforcement agencies in the US to solve crimes, but little is known about the software, even within the law-enforcement community, according to a Saturday report.
Per The New York Times, the software – Clearview AI – is a collaboration between Hoan Ton-That, an Australian native who moved to the US in 2007, and Richard Schwartz, a former aide to former New York City Mayor Rudy Giuliani.
The two first crossed paths at the Manhattan Institute, a conservative think tank, The Times reported.
Kirenaga Partners, a small New York-based private equity firm, is an early investor in Clearview AI, which has also received funding from venture capitalist Peter Thiel, per the report. Thiel – Facebook’s first large outside investor, per CNN – still serves on Facebook’s board of directors.
When a user uploads a photo to the application, which has been used by more than 600 law-enforcement agencies, Clearview AI scans for matches across its catalogue of billions of photos it scraped from social media websites, typically in violation of those sites’ terms of service, according to The Times. It then shows the results to whoever made the search.
Clearview did not share which law-enforcement agencies have used its tool. In addition to the hundreds of law-enforcement agencies, the company has also licensed its software to private companies, The Times reported. But according to the report, both local and national law enforcement agencies confirmed they had used the software to help solve crimes ranging from shoplifting to murder. The law-enforcement agencies said they had little knowledge of who had developed Clearview AI, or how the software worked, according to the report.
Facebook says it’s investigating Clearview AI after The Times’ report
The app’s founders began marketing the service to law-enforcement agencies for as little as $2,000, The Times said. The founders reportedly relied on former and current Republican officials to approach law-enforcement agencies about using the low-priced service, or in some cases, a free trial of the software.
When analyzed by The Times, underlying code in the application also revealed the software had been designed to work with augmented reality technology, meaning someone wearing special goggles or glasses could potentially use Clearview AI to instantly determine details, including a person’s identity and address. Ton-That told The Times his company was developing the augmented reality technology as a prototype, but it had no plans to release it publicly.
In a statement, a Facebook spokesperson told Business Insider the company was investigating Clearview AI following the report.
“Scraping Facebook information or adding it to a directory are prohibited by our policies, so we are reviewing the claims about this company and will take appropriate action if we find they are violating our rules,” the spokesperson said.
The Facebook spokesperson would not comment on Thiel’s investment in the startup, though he pointed Business Insider toward a statement from Thiel’s spokesperson given to The Times.
“In 2017, Peter gave a talented young founder $200,000, which two years later converted to equity in Clearview AI,” Jeremiah Hall, Thiel’s spokesman said, per The Times. “That was Peter’s only contribution; he is not involved in the company.”
As The Times noted, Thiel attained fame for bankrolling Hulk Hogan’s lawsuit that bankrupted Gawker. Thiel and Ton-That had both received negative coverage on Gawker, per the Times report.
Spokespeople for other social media platforms reportedly used by Clearview AI, like Twitter, YouTube, Instagram, and Venmo, confirmed to The Times that scraping for images on their sites violated company policies. Twitter went a step further, telling The Times that it was against policy to use images from its platform for facial recognition.
When The Times asked a law-enforcement agency to run its own reporter’s photo through the software, representatives for Clearview AI, who had previously ignored her requests for information, contacted the law-enforcement agencies to ask if they’d been speaking with the media.
The Times concluded the company had been keeping tabs on the reporter, Kashmir Hill, and that Clearview AI was able to see what law enforcement was searching for and when they were searching for it.
Concerns over facial-recognition technology have long centered around privacy issues and claims of racial bias in the technology. In December 2019, a study released by the National Institute of Standards and Technology found facial-recognition technology had a racial bias, typically having a more difficult time identifying non-white people and women.
On Wednesday, Rep. Alexandria Ocasio-Cortez, a Democrat from New York, expressed fears about facial-recognition technology during a meeting of the House Oversight and Reform Committee.
“This is some real life Black Mirror stuff that we’re seeing here,” Ocasio-Cortez said, in reference to the hit British Netflix series that often delves into the dystopian aspects of technology.
Despite concerns over the technology, law-enforcement agencies around the US have reportedly adopted such controversial tools, though their use is not often publicized due to the nature of police investigations, Business Insider previously reported.
But in addition to the typical risks and controversies associated with the use of facial-recognition technologies, Clearview AI presents new risks, as law-enforcement agencies share potentially sensitive images with the software while knowing little about how the company handles its data.
As The Times noted, police agencies have had access to facial-recognition technology for decades, though tools like Clearview AI don’t limit searches to government databases, which has long been a limitation for law enforcement’s facial-recognition software.
Clearview AI said its software was effective about 75% of the time, though, as The Times noted, it’s impossible to determine how many false results the service provides, as it has not been tested by a third party.