Deepfake Tech Can Now Anonymize Your Face to Protect Privacy

Deepfake videos have demonstrated their applications in entertainment, both acceptably and controversially, but the generative adversarial networks (GANs) behind them still have a long way to go before they offer convincing results. This has led to a lack of practical applications and plenty of paranoia, but we're beginning to see efforts to employ deepfake technology in ways that help people protect themselves. A recent paper published at the International Symposium on Visual Computing demonstrates how deepfakes could help protect the right to privacy before they become a tool used to cause harm.
The paper utilizes face-swapping to anonymize the speaker's appearance. Although the authors were not the first to consider this application, earlier work simply transplanted expressions onto an existing face whose owner had consented to the swap. The new method instead replaces a person's existing face with a uniquely generated one, built from a data set of 1.5 million face images. In theory, the new face won't match any real face.

While the GAN produces suitable results for photos, it still struggles to replace faces in video. This is likely because the network has to generate the "new" face anew for each frame, and maintaining consistency for a non-existent face isn't an easy task in theory or in practice.
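One common way to reduce this kind of frame-to-frame flicker is to smooth the latent codes that drive the generator, so the synthetic face drifts gradually rather than jumping each frame. The sketch below illustrates that idea with a simple exponential moving average; it is a generic stabilization trick for illustration, not necessarily the fix the paper's authors use, and `smooth_latents` is a hypothetical helper name.

```python
import numpy as np

def smooth_latents(latents, alpha=0.8):
    """Exponentially smooth per-frame latent codes.

    Each output code is a weighted blend of the previous smoothed
    code and the current raw code, so the generated face changes
    gradually between frames instead of flickering.
    """
    smoothed = [np.asarray(latents[0], dtype=float)]
    for z in latents[1:]:
        z = np.asarray(z, dtype=float)
        smoothed.append(alpha * smoothed[-1] + (1 - alpha) * z)
    return smoothed
```

Feeding the smoothed codes to the generator trades a little responsiveness (a higher `alpha` reacts more slowly to expression changes) for temporal stability.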

For the purposes of anonymizing a subject in a video, however, a glitchy look doesn't matter much. After all, the purpose of this GAN isn't to fool anyone but rather to obscure a person's face without losing their expression. Blocking out a person's face with a box (as seen on the left side of the GIF above) prevents identification, but it also hides most of what they're attempting to communicate.
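The difference between the two approaches can be sketched in a few lines. In this illustration (not the paper's implementation), `anonymize_box` simply blacks out the face region, destroying the expression along with the identity, while `anonymize_swap` pastes a synthetic face of the same size into the region; `generated_face` stands in for the GAN's output, and a real system would also align and blend it.

```python
import numpy as np

def anonymize_box(frame, bbox):
    """Crude anonymization: black out the face region entirely.
    Identity is hidden, but so is the subject's expression."""
    x, y, w, h = bbox
    out = frame.copy()
    out[y:y + h, x:x + w] = 0
    return out

def anonymize_swap(frame, bbox, generated_face):
    """GAN-style anonymization: replace the face region with a
    synthetic face, leaving the rest of the frame untouched."""
    x, y, w, h = bbox
    out = frame.copy()
    out[y:y + h, x:x + w] = generated_face
    return out
```

Both functions leave pixels outside the bounding box unchanged; the swap variant is what lets the viewer still read a (synthetic) facial expression.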
In circumstances where anonymity is vital but expression still matters, such as anonymizing sources in news reports or documentary films whose identities would put them at risk, this method could be employed today. Its only notable issues are glitches that occur in poor lighting or when the subject moves significantly. Further work will likely resolve these problems in the coming years, and the method's expected uses rarely run into these drawbacks anyway: interview subjects typically don't make significant movements, and lighting conditions are controllable more often than not. Besides, when it comes to correcting poor lighting, there's already an AI for that as well.
We can already forge voices precisely enough to impersonate other people, to the tune of $243,000 in theft, so anonymizing voices poses no additional hurdle. Altering a voice has never required artificial intelligence, and more thorough processes for vocal anonymity exist as well. Now we have a good start on video. If you want to try it for yourself, you can access the source code on GitHub.