Replace Your Face With an A.I. Twin to Trick Facial Recognition

Written by Thomas Smith

The New York Times called Thomas Smith a "veteran programmer." For over a decade, Smith has led Gado Images, an A.I.-driven visual content company.

If you’ve posted a photo of yourself online in the past few years, there’s a good chance Clearview AI has slurped it up and added it to the company’s massive facial recognition database of more than 3.1 billion images. The New York Times said that Clearview could “end privacy as we know it.” In January, I got my hands on my own Clearview AI profile, and its contents freaked me out.

A wide variety of legal and legislative challenges have been mounted against Clearview. My own article was cited in the American Civil Liberties Union’s landmark class-action lawsuit against the company. But even if lawmakers rein in Clearview AI, stopping facial recognition online is like playing whack-a-mole: Another clone of the company will simply appear in its place.

Given the ubiquity of facial recognition online, what can consumers do to keep their faces out of massive surveillance databases? Synthetic-content startup Generated Media thinks it has a solution: Replace your face with a technologically advanced fake created by a neural network. The fake gives people a sense of your appearance while keeping your actual face anonymous.

Generated Media specializes in creating artificial faces. To do this, it uses a technology called generative adversarial networks (GANs), which pit two warring neural networks against each other. In Generated Media's case, as the networks duke it out, one network (called the generator) gets better and better at producing artificial faces, while the other (the discriminator) gets better at spotting them. The technology is now so good that it can produce fake faces that easily fool humans into thinking they're real. The company made a splash in a recent New York Times article about the tech behind fake faces.
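To make the adversarial setup concrete, here is a minimal PyTorch sketch of the core training loop. It is purely illustrative: Generated Media hasn't published its architecture, and production systems use far larger convolutional models, but the push and pull between generator and discriminator is the same.

```python
# Minimal GAN sketch (illustrative only, not Generated Media's model):
# a generator learns to turn random noise into images while a
# discriminator learns to tell fakes from real photos.
import torch
import torch.nn as nn

LATENT = 64          # size of the random noise vector fed to the generator
IMG = 32 * 32        # flattened toy "image" size; real systems use CNNs

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),      # outputs pixels in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),     # probability the input is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    # Stand-in for a batch of real face photos; a real pipeline would
    # load actual images here.
    real = torch.rand(32, IMG) * 2 - 1
    noise = torch.randn(32, LATENT)
    fake = generator(noise)

    # Discriminator step: label real images 1, generated images 0.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call its fakes real.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

As training progresses, neither network can afford to stand still, and the generator's output drifts from noise toward images the discriminator can no longer distinguish from the real thing.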

GANs can also perform this feat on a massive scale. Generated Media is less than a year old and has already created more than 2 million fake faces. The faces represent people of every possible age and appearance. They look completely real, but the people they depict don’t actually exist and never have. Generated Media licenses them as stock photos on its website Generated.Photos and uses them as training data to reduce bias in other A.I. systems.

Armed with this massive database of images, Generated Media is now turning its attention to consumers. A tool launched today, called Anonymizer, lets you upload a real photo of your face and receive a variety of fake faces that look similar to your own. You can then use the fakes instead of your real face on social media or anywhere else you need to post a photo on the public internet. The fake photos are free for personal use and come with an option to use a transparent background.

The fake photos, Generated Media says, should look enough like you that contacts will find them believable. But because they’re not actually you, if Clearview or another facial recognition company adds your fake face to its database, it won’t be able to use the fake face to find the real you. Generated Media says users can swap out their photos for new fakes “at least every day” for an extra measure of anonymity.

In an interview, Tyler Lastovich, head of strategy for Generated Media, told me that the company built the Anonymizer after he was “personally approached on LinkedIn by someone using a Generated photo as their avatar” and after the company saw “many, many more [of its photos used as profile images] on Twitter.” Others have attempted to create tools that use A.I. to obscure the identities of protestors and activists online by subtly altering their profile photos. But Clearview told the New York Times that these tools do not work to trick its systems. A totally fake photo like the ones created by Generated Media would likely be a safer solution.

I tested out the Anonymizer by uploading a photo of my face, taken at a fancy pizza place in San Ramon, California.

Photo courtesy of the author

Generated Media’s Anonymizer presented me with a grid of about 20 look-alikes, with the option to view more. I scrolled through and chose the fake face that looked most similar to my own.

Image courtesy of Generated Media

Here we are side by side. Because the fake face had a transparent background, I was able to superimpose it over a photo I took at the same fancy pizza place where my real photo was taken.

Do we look exactly the same? No. For one thing, I would never wear my hair like that. But we look close enough that contacts beyond my close friends and family might mistake the fake for me — especially if I used the fake on a social media site like Twitter, where profile photos are a scant 49×49 pixels.

A computer might make the same mistake, too. To test how similar my A.I. doppelgänger and I look, I compared our faces using a face comparison API from Face++, a ubiquitous provider of facial recognition software. Face++ returned a “normal probability” that my face matched my A.I. clone’s and estimated with 64% certainty that we were the same person.
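For readers who want to run the same comparison, here is a rough sketch of a call to Face++'s Compare API. The credentials and file names are placeholders; you'd need to register for your own API key, and the exact response fields may differ from what's shown.

```python
# Sketch of a Face++ face-comparison call (placeholder credentials).
import requests

API_KEY = "your_api_key"        # get real credentials from the Face++ console
API_SECRET = "your_api_secret"

# Compare two local photos: a real face and its A.I. look-alike.
with open("real_face.jpg", "rb") as f1, open("fake_face.jpg", "rb") as f2:
    resp = requests.post(
        "https://api-us.faceplusplus.com/facepp/v3/compare",
        data={"api_key": API_KEY, "api_secret": API_SECRET},
        files={"image_file1": f1, "image_file2": f2},
    )

result = resp.json()
# "confidence" is a 0-100 similarity score; "thresholds" gives cutoffs
# for deciding whether the two faces belong to the same person.
print(result.get("confidence"), result.get("thresholds"))
```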

That’s not perfect, but it’s not bad for a fake face imagined by a computer, and results should improve over time. The Anonymizer works by analyzing a user’s face and finding the closest match from within Generated Media’s existing database of fake faces. As the company generates more fake faces, the chances of finding a highly believable match will increase.
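Generated Media hasn't said exactly how that matching works. One plausible implementation (my assumption, not the company's published method) is to run every face through an embedding model and return the nearest neighbors by cosine similarity, roughly like this:

```python
# Hypothetical nearest-match lookup over a database of fake-face embeddings.
import numpy as np

def top_matches(query: np.ndarray, database: np.ndarray, k: int = 20) -> np.ndarray:
    """Return indices of the k database faces most similar to the query."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    scores = db @ q                       # cosine similarity per database face
    return np.argsort(scores)[::-1][:k]

# Stand-in data: random vectors in place of real face embeddings.
# 512 dimensions is a common output size for face-embedding models;
# Generated Media's real database holds over 2 million faces.
database = np.random.randn(100_000, 512).astype(np.float32)
query = np.random.randn(512).astype(np.float32)

print(top_matches(query, database))       # the 20 closest fake faces
```

Under this kind of scheme, every new fake face added to the database makes it more likely that some entry sits close to any given query, which is exactly why matches should keep improving.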

In my testing, I found that the more recognizable a face is, the harder it becomes to find a convincing fake. To test this, I uploaded a photo of Donald Trump to the Anonymizer. The results looked totally different from his actual appearance. For better or worse, I’ve almost certainly seen more photos of Trump’s face over the past four years than I’ve seen of my own. When you compare an extremely familiar real face like Trump’s with a fake, the little flaws in the fake are glaringly obvious. For less familiar faces, they’re easier to overlook, and the fake appears more convincing.

This suggests another use case for the system. The Anonymizer’s killer app may not necessarily be creating face clones that your contacts will believe are actually you. Rather, the system may be best for situations where you want to give a sense of your appearance to a person you’ve never met before, while keeping your actual appearance anonymous.

Dating profiles, Generated Media says, are a prime example. If you’re creating an online dating profile, you can grab a fake image from Generated Media’s Anonymizer and use it in place of your real face. The image would give a good sense of your appearance — if you met someone special and later chose to reveal your real face, they hopefully wouldn’t feel catfished. But until you chose to reveal the real you, the fake face would prevent the cyberstalkers who frequent dating sites from knowing your exact appearance and targeting you IRL.

In a similar use case, the company has worked with investigative journalists, using a related tool to create fake faces for sources who wish to remain anonymous. The fake face can give a journalist (and their readers) a sense of the source’s age, skin color, hair length, and other key elements of their appearance, while ensuring that their real identity remains protected. (Journalists who use this tactic to protect their sources should acknowledge in their story that the face is a fake and is being used to protect a sensitive identity.) Clearview AI downloads millions of images from news articles and social media sites, so including fake faces in sensitive articles is a good way to keep Clearview from indexing a source’s face and linking it back to the story.

If you’re planning to post about sensitive topics like a protest, you might consider taking a similar approach by creating a separate social media account and populating it with a fake version of your face. That way, sensitive content won’t be linked back to your real face and won’t be available to those who might search for your face on a platform like Clearview’s. Always check the terms of service of your social network before uploading a fake face; Twitter, for example, does not allow fake faces if they’re used for deceptive purposes. If you’re using the fake to protect your right to unimpeded free speech, you’re likely fine.

Not all privacy advocates are convinced, though, that systems like the Anonymizer will prove effective. Activist and Harvard Kennedy School Shorenstein Center research fellow Chris Gilliard told me, “I’d be pretty wary of any claims that this can function as a way to protect people’s anonymity.” Data sharing goes well beyond public photos posted to the internet, and Gilliard told me that “part of the problem with the lack of regulation and oversight with our data and with so many of these digital tools is that it’s relatively easy to obtain photos of people, whether that means scraping social media or buying it from the DMV.” Gilliard felt that these are social rather than technical problems, and not something society can “tech” its way out of.

Fake faces also come with risks. According to the New York Times, fake faces can be “used as masks by real people with nefarious intent: spies who don an attractive face in an effort to infiltrate the intelligence community; right-wing propagandists who hide behind fake profiles, photo and all; online harassers who troll their targets with a friendly visage.” Earlier this year, a fake face was used in this way to share a story about Hunter Biden from a fake intelligence firm.

In the hands of a nefarious user, the Anonymizer could potentially create these kinds of fakes. But for those looking to create a fake face for negative reasons, there are likely easier methods. The website ThisPersonDoesNotExist.com generates a fresh fake face every time you load it in your browser, using GAN technology similar to Generated Media’s, without the need to upload an image.
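Grabbing one of those faces takes only a couple of lines. This sketch assumes the site still serves a freshly generated JPEG directly at its root URL, which was its behavior at the time of writing and may since have changed.

```python
# Fetch a GAN-generated face from ThisPersonDoesNotExist.
# Assumption: the root URL returns a raw JPEG; a User-Agent header is
# set because the site may reject bare HTTP clients.
import requests

resp = requests.get(
    "https://thispersondoesnotexist.com",
    headers={"User-Agent": "Mozilla/5.0"},
)
with open("fake_face.jpg", "wb") as f:
    f.write(resp.content)
```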

And because the code behind face-generating GANs is widely available, most sophisticated criminals would likely build their own face-generation systems, rather than relying on faces that could be traced back to a particular company — and thus potentially to them. For its part, Generated Media says users may not “use your image to impersonate another person or to conduct illegal activity.”

If you’d like to post a representation of yourself online without worrying about mass surveillance—or if you just want to test how well your contacts know you by swapping your Twitter profile pic for a fake and seeing if anyone notices—check out the Anonymizer. At the very least, it’s fun to see a grid of your own personal fake clones. And if you’re posting sensitive content online, worried you could be targeted for stalking, or just want to control who knows your true appearance, the Anonymizer isn’t just a plaything — it’s a potentially powerful tool.
