Tuesday 19 October 2021

"The worst facial recognition company in the world." Surveillance technology vendors quietly complained about Clearview AI at a top surveillance conference recently

A Clearview AI booth is shown at the Connect:ID surveillance technology conference in Washington DC in October 2021.
  • When Clearview came up in conversation, people's expressions would grow serious, or irritated.
  • The industry is upset with the company for giving facial recognition technology a bad name.
  • CEO says there should be a reasonable expectation of privacy at events that journalists might attend.

When I attended the surveillance technology conference Connect:ID recently, the elephant in the room was Clearview AI, a company that matches search images with pictures scraped from the internet.

I overheard multiple conversations that referred to Clearview with skepticism, at best. Several industry professionals said openly that they neither liked nor respected the company. One government contractor called Clearview "creepy." He told me he'd read about the company's extensive ties to the far right and was alarmed by that.

In one discussion, an attendee called the company "the worst facial recognition company in the world." When I caught up with him - Jeremy Grant from the law firm Venable LLP - he said he was dismayed to discover he would be talking on a panel with Clearview CEO Hoan Ton-That. Minutes before it started, Grant was told Ton-That had bailed, for an unknown reason.

There's a fundamental difference between Clearview and most facial recognition providers, and the sense I got from this conference is that the industry is upset with the company for giving the technology a bad name. Clearview matches search images against billions of pictures it scraped from social media. Other facial recognition vendors, such as NEC and Panasonic, usually match search images against faces in a database provided by a client - often something like a collection of mugshots or worker ID photos. (Many companies train their algorithms on photos from the internet, though.)

Throughout the conference, when Clearview came up in conversation, people's expressions would grow serious, or irritated. "Aren't they the people who scraped all those photos?" one attendee said. There was a level of awareness - even among an industry that critics allege violates people's privacy - that Clearview took an exceptional step by collecting photos of people posted on social media.

While the booths of other facial-recognition vendors were often bustling, Clearview's setup was comparatively quiet. The company's Q&A roundtable event, however, drew heavy attendance. Ton-That and Clearview's new VP of federal sales, Matt Jones, fielded questions. One man from the State Department's Bureau of Diplomatic Security asked how the company responded to the controversy that erupted after the public learned it had scraped billions of images from the internet. Ton-That said it only scraped photos that were already public.

The same man later asked whether Clearview has any way to report false positives, or provide feedback on results. Ton-That dodged the question, while Jones said "we always encourage feedback." But in essence, no, you can't report false positives to Clearview.

"But really there's no such thing as a false positive, because it's not an affirmative match?" Clearview's new PR rep, Josh Zecher, said to Ton-That from across the table. Ton-That then noted that Clearview search results are "investigative leads" and not matches.

This is an important distinction because "match" implies that Clearview delivers a definitive search result, while "investigative lead" suggests the technology is just a starting point or one data point for broader probes by law enforcement and other authorities. This likely protects Clearview AI from liability in the case of an incorrect search result.

However, Ton-That frequently slipped from the PR-approved language when touting the technology. "No one has done what we do at this scale with this accuracy," he said later in the conference, adding that anyone "is able to solve cases instantaneously if they get matches in the system."

Clearview AI CEO Hoan Ton-That (center) is shown seated at a table during the Connect:ID surveillance technology conference in Washington DC in October 2021.

Ton-That also wouldn't say whether Clearview was honoring the cease-and-desist letters the company received from Facebook, Twitter, LinkedIn, and other social media sites after it scraped their photos. When I asked how often Clearview has complied with requests to remove images from its database, which it has claimed to do for more than a year, he just smiled and said, "We can talk about that later."

A different attendee asked how accurate Clearview is when the input photo has a strange angle or poor lighting. Ton-That said Clearview did "well." He said its database now holds 10 billion photos, up from 3 billion, and is only getting better. But he repeated an accuracy figure - 98.6% per 1 million faces - that the company has cited since 2019.

"People are constantly dumping their - it's just a constant," Clearview's Jones said during the roundtable event, referring to the steady stream of people posting their photos online, only for Clearview to scrape them.

In a separate event this week, Ton-That interviewed Joshua Findley, a special agent for Homeland Security Investigations under ICE who focuses on crimes against children. They discussed the case of a man who was sentenced to 35 years in prison "for repeatedly sexually assaulting a child," per a DOJ press release.

Findley said he'd been searching for the suspect, whose face appeared in three frames of an abuse video, but wasn't having success. After he sent the pictures to an HSI investigator who had access to Clearview AI, he said, the investigator found the suspect in the background of other photos taken at a bodybuilding expo. Soon after, HSI was able to find the man's name, obtain a search warrant based on probable cause, and seize the man's laptop and cameras, which contained incriminating material, Findley said.

Clearview has frequently mentioned this as an example of its technology being used for good. The case is remarkable, but it's unclear how representative it is. In 2019, Clearview claimed it helped catch a terrorism suspect, but the company had simply searched for the suspect and sent the results to the NYPD after the department had already arrested the man.

Findley said HSI has used Clearview on "prolific sextortion artists" who coerce minors into sending them sexually explicit images. This has been a huge problem, especially for young girls, for decades. "That's something I never knew happened," Ton-That said.

This week, Ton-That participated in a panel event on privacy put on by the Federalist Society, which was livestreamed. The CEO said there should be a reasonable expectation of privacy at events that journalists might attend.

"I think it's quite different in terms of expectations of privacy when you're looking up something on Google Maps, when you're doing a search for something on Google, you expect it not to be private," he said. "As opposed to when you're at an event like this one, someone takes a photo of you and perhaps they write an article about it. There's a completely different expectation of privacy there."

Read the original article on Business Insider


