Deepfake Tech Can Now Anonymize Your Face to Protect Privacy
Deepfake videos have demonstrated their applications in entertainment, both accepted and controversial, but the generative adversarial networks (GANs) behind them still have a long way to go before they offer convincing results. This has led to a lack of practical applications and plenty of paranoia, but we're beginning to see efforts to employ deepfake technology in ways that can help people protect themselves. A recent paper presented at the International Symposium on Visual Computing demonstrates how deepfakes could help protect the right to privacy before they become a tool used to cause harm.
The paper uses face-swapping to anonymize a speaker's appearance. Although the authors were not the first to consider this application, earlier work simply transplanted the subject's expressions onto the face of a consenting person. The new method instead replaces the subject's face with a unique one generated from a dataset of 1.5 million face images. In theory, the new face won't match any real person's.
While the GAN produces suitable results for photos, it still struggles to replace faces in video, likely because the network has to generate the "new" face anew for each frame. Maintaining consistency for a non-existent face isn't easy in theory or in practice.
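The consistency problem can be illustrated with a toy sketch. The `toy_generator` below is a hypothetical stand-in for a real GAN generator, not the paper's model: if each frame's face comes from a freshly sampled latent vector, the output drifts from frame to frame, while reusing one latent vector keeps it stable.

```python
import math
import random

def toy_generator(latent):
    # Stand-in for a GAN generator: a deterministic map from a latent
    # vector to a tiny list of "pixel" values.
    return [math.tanh(2.0 * z) for z in latent[:4]]

rng = random.Random(0)

# Naive per-frame generation: a fresh latent every frame causes flicker.
naive_frames = [toy_generator([rng.gauss(0, 1) for _ in range(8)])
                for _ in range(3)]

# Fixing one latent for the whole clip keeps the generated face stable.
fixed_latent = [rng.gauss(0, 1) for _ in range(8)]
stable_frames = [toy_generator(fixed_latent) for _ in range(3)]

def max_drift(frames):
    # Largest pixel-wise change between consecutive frames.
    return max(abs(a - b)
               for f1, f2 in zip(frames, frames[1:])
               for a, b in zip(f1, f2))

print(max_drift(naive_frames) > 0.0)   # output changes frame to frame
print(max_drift(stable_frames) == 0.0) # identical output every frame
```

Real generators also condition on the source frame's pose and expression, so even a fixed latent doesn't fully solve the flicker, but the sketch shows why per-frame sampling alone guarantees inconsistency.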
For the purpose of anonymizing a subject in a video, however, a glitchy look doesn't matter much. After all, the purpose of this GAN isn't to fool anyone but rather to obscure a person's face without losing their expression. Blocking out a person's face with a box (as seen on the left side of the GIF above) prevents identification, but it also hides almost everything about what the person is attempting to communicate.
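The box-style redaction described above is trivial to implement, and a minimal sketch makes clear why it destroys expression: it overwrites every pixel in the face region with a flat value. The function and the list-of-lists "image" below are illustrative stand-ins; a real pipeline would get the box coordinates from a face detector and operate on video frames.

```python
def block_out_face(image, top, left, height, width, fill=0):
    """Redact a rectangular face region by overwriting it with a flat color.

    `image` is a list of rows of pixel values; (top, left, height, width)
    would come from a face detector in a real pipeline.
    """
    redacted = [row[:] for row in image]  # copy so the original is untouched
    for r in range(top, top + height):
        for c in range(left, left + width):
            redacted[r][c] = fill
    return redacted

# A small synthetic "frame" with distinct pixel values.
frame = [[100 + r * 10 + c for c in range(6)] for r in range(6)]
out = block_out_face(frame, top=1, left=2, height=3, width=2)

print(out[2][3])  # 0: inside the box, all detail (and expression) is gone
print(out[0][0])  # 100: pixels outside the box are unchanged
```

Face-swapping keeps the geometry of the mouth, eyes, and brows intact instead of flattening the region, which is what makes it preferable here.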
In circumstances where anonymity is vital but expression still matters, such as concealing the identity of sources in news reports or documentary films who would be at risk if recognized, this method could be useful and employed today. Its only notable issues are glitches that occur in poor lighting or when the subject makes significant movements. Further work will likely resolve these problems in the coming years, and the method's expected use cases rarely trigger them anyway: interview subjects typically do not move much, and lighting conditions are controllable more often than not. Besides, when it comes to correcting poor lighting, there's already an AI for that as well.
We can already forge voices precisely enough to impersonate other people, to the tune of $243,000 in theft, so anonymizing a voice poses no additional hurdle. Altering voices has never required artificial intelligence, and more thorough processes for vocal anonymity exist as well. Now we have a good start with video. If you want to try it for yourself, you can access the source code on GitHub.