News

Your face belongs to us

Session 6 - Kashmir Hill 2024

by Sam Leith @ OffGrid Sessions 2024

When the New York Times reporter Kashmir Hill first heard about Clearview AI, she was intrigued. Word had reached her that this company was able to produce completely accurate facial recognition of almost anyone. She knew that the systems then in use by US law enforcement agencies were unreliable at best. So who was this company that had cracked it? How had a company nobody had ever heard of – rather than, say, Google or Meta – got hold of this unimaginably powerful technology?

The inquiry that was to turn into her book Your Face Belongs to Us began with that most arcane technique of the investigative reporter: a Google search. The company had a website with just its name and a slogan: “Artificial intelligence for a better world.” That, and an address – which turned out to be just three blocks from the New York Times’s offices. So she walked across town – and discovered that the building didn’t exist.

The mysteries piled up. She found the names of two investors in the company, one of whom was the PayPal/Palantir tycoon Peter Thiel. She knew his spokesman. She asked him about Clearview. He said he’d look into it – and never contacted her again. She searched LinkedIn and found only one person who claimed to work for Clearview. She reached out. Nothing. A small forest of red flags seemed to be sprouting.

Then she got a break. A detective in Florida called her up (she’d reached out to the police departments that some “Google dorking” had indicated were using Clearview products), raving about how brilliant the tech was: he said you could feed in a picture of a person in glasses, or a hat, or half turned away from the camera, and it would identify them – “you’d get hit after hit after hit”. He’d adopted it on a free trial, but now the department was paying for it in preference to the state system, which “didn’t work”.

Could she see it in action, she asked. Sure, he said – but since showing her any of his own searches would compromise an ongoing investigation, they agreed she’d send some pictures of herself for him to run through the algorithm. She sent them. And immediately, he stopped answering her calls. The same thing happened with another officer, who called her full of excitement about the tech, agreed to search her face… and then ghosted her.

As she came to realise, the company may not have been answering her inquiries, but it was very aware of her. It had used its own technology to “put an alert on my face”. Anyone who ran Kashmir Hill through the database would get a call telling them to stop talking to her immediately.

“It told me two things: that they could see who law enforcement was searching for; and they could use it to flag people who were meddlesome. It only made me more interested. Why this little company? Why not Google or Meta?”

The truth was, though, that Google and Meta had got there first. The dream of computerised facial recognition dates to the early 1960s, when the CIA funded engineers to build an automated face recognition system. It didn’t work – and for years afterwards it was thought that this was a uniquely human cognitive skill.

But in fact, sheer computational power, plus the vast datasets we all handed over by tagging our photographs on Facebook, Instagram and LinkedIn, mean that computers can recognise faces – and for a decade or more now they have been able to do so better than humans. Google has had the tech since 2011, and Facebook developed a version in 2016. But both companies – neither of which is exactly shy when it comes to invading privacy – held the technology back for years because they thought it was “just too scary”. What would an evil dictator do with such a tool?

And the tech lead at Clearview – whom Kashmir eventually managed to interview – was a goofy kid called Hoan Ton-That, who had come to California chasing the tech bubble and whose main previous achievement had been an app that let you superimpose Donald Trump’s haircut on someone else’s head. As he told her, he had bolted together Clearview’s terrifying tech simply by searching the code-sharing site GitHub for “facial recognition”. “It’s as if I Googled ‘flying car’ and then built one!” he told her, laughing.

“What Clearview AI had done, then, was not make a technological breakthrough,” said Kashmir. “They had made an ethical breakthrough. It was ‘ethical arbitrage’.” They had done what other companies could have done but declined to do, because the moral implications were so murky.

And someone was probably always going to. The engineers who worked on facial recognition just wanted to solve the problem, and imagined that someone else would work out how to control it. Kashmir reminded us of Oppenheimer’s line about “technical sweetness” on the Manhattan Project: “When you see something that is technically sweet, you just do it, and worry about what to do about it later.”

When Kashmir first reported on Clearview, they had a database of 3 billion images scraped from the public internet. At the last count it was 40 billion. Why didn’t Meta sue them for taking images from Facebook? Cease-and-desist letters were sent, cursorily, but they were ignored and never followed up on – because, Kashmir speculates, Google and Meta knew they’d soon want to scrape copyright material from the internet themselves for generative AI, and didn’t want to establish too strong a precedent that doing so might be illegal or unethical.

The current state of play is mixed. Clearview is still being used by police in the US. Europe deemed the collection of the database illegal and banned the company from doing business here. In the US there’s strong resistance to real-time scanning; in the UK, police forces are even now trialling vans that use AI to scan crowds for wanted criminals… and traffic offenders.

But everywhere there’s the danger of what Kashmir calls “surveillance creep”: once one use-case is admitted, more will be found. The owner of Madison Square Garden used facial recognition to ban from his venues lawyers working for firms that were in litigation with him. Taylor Swift is said to have used it to comb her concert crowds for known stalkers.

An especially sinister case was when Clearview lent its tech to the Ukrainian army, theoretically so it could screen out Russian spies seeking to infiltrate its ranks. In practice, it was used to identify Russian corpses and send photographs of the dead soldiers to their mothers, fathers, wives and girlfriends, in the hope of turning Russian opinion against the war.

And even if Clearview itself is only available to law enforcement, the technology is out there, and it’s easier and easier to come by. PimEyes is an image-search company that theoretically exists so users can search for their own faces (there’s a tickbox on which you promise, Scout’s Honour, that it’s your own face you’re searching for). A registered user can make 25 searches an hour – so that’s a lot of narcissism… or a lot of people using it to search for other faces. It works scarily well.

And Russia and China are already way ahead: their authoritarian states are using this tech to scan faces in real time, all the time. Yet, as Kashmir says, there are some great uses for it: “I know so many people in law enforcement who really value this technology: it can help solve heinous crimes.” But it’s also vulnerable to “automation bias” (trusting fallible computers too much). You can be arrested for the crime of looking like someone else. “We need to build in the recognition that sometimes they go wrong.”

Above all, the question is: “Who determines what happens with technology? Is it those companies, is it government, or is it us? What sort of world do we want to live in?” Kashmir says that there’s already talk of releasing the tech in such a way that you might have privacy settings for your face.

When she gives talks, she asks the crowd who would let their face be searchable by anyone; who would let it be searchable only by friends and contacts; and who would insist it be entirely private. Results are mixed. But when she speaks in California, almost everyone says they’d be happy for their face to be publicly searchable: “They’re very in favour. That is where they are building the tools… and that is the world they want.”

Three key takeaways

  1. There are some companies who make Google and Meta look like ethical paragons

  2. Once the technology exists, it’s already too late

  3. The guy who owns Madison Square Garden is a dick