New tool debunks deepfakes of Trump and other world leaders

This is a real photo of US President Donald Trump speaking with Russia’s President Vladimir Putin in 2017.


Deepfakes of world leaders may be easier to debunk using a new detection method, according to an academic paper released Wednesday. Researchers created profiles of the unique expressions and head movements made by powerful people — like Donald Trump, Hillary Clinton, Barack Obama and US presidential hopeful Elizabeth Warren — when they talk. That “soft biometric model” helped detect a range of deepfakes, the kind of manipulated videos powered by artificial intelligence that have sprung up lately featuring Mark Zuckerberg and others. 

(The rest of us are still out of luck if anybody makes a deepfake of us, though.)

The researchers spelled out the threat of deepfakes in stark terms. 

“With relatively modest amounts of data and computing power, the average person can, for example, create a video of a world leader confessing to illegal activity leading to a constitutional crisis, a military leader saying something racially insensitive leading to civil unrest in an area of military activity, or a corporate titan claiming that their profits are weak leading to global stock manipulation,” they wrote. 

Deepfakes are video forgeries that can make people appear to be doing or saying things they never did, like Photoshop for video on steroids. Digital manipulation of video has existed for decades, but deepfake software powered by artificial intelligence has made doctored clips easier for anyone to make and harder to detect as fake. 

The new research was released the day before US lawmakers are set to hold their first hearing on the threat of deepfakes. The House Intelligence Committee will hear from experts Thursday morning about the national security challenges of manipulated media created with artificial intelligence, like deepfakes. 

The researchers’ new technique builds a model of how each world leader naturally speaks, based on facial and head movements like nose wrinkling, lip tightening and head rolling. That model serves as a sort of fingerprint for how each individual talks.

“Although not visually apparent, these correlations are often violated by the nature of how deepfake videos are created and can, therefore, be used for authentication,” they wrote. 
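The idea of capturing those correlations can be sketched in a few lines. This is a simplified illustration, not the paper's actual pipeline: it assumes you already have per-frame measurements of facial features (in practice extracted with a face-tracking toolkit) and reduces them to a vector of pairwise Pearson correlations, one value per feature pair.

```python
import numpy as np

def correlation_fingerprint(features: np.ndarray) -> np.ndarray:
    """Reduce per-frame facial measurements to a "fingerprint" vector.

    features: array of shape (n_features, n_frames), e.g. one row each
    for eyebrow lift, lip tightening and head roll, measured per frame.
    Returns the upper triangle of the pairwise correlation matrix.
    """
    corr = np.corrcoef(features)              # (n_features, n_features)
    upper = np.triu_indices_from(corr, k=1)   # upper triangle, no diagonal
    return corr[upper]                        # one value per feature pair

# Toy stand-in: 4 features tracked over a 250-frame clip
rng = np.random.default_rng(0)
clip = rng.standard_normal((4, 250))
fp = correlation_fingerprint(clip)
print(fp.shape)  # 4 features give 6 pairwise correlations
```

Because deepfake generators rewrite the face frame by frame, these cross-feature correlations tend to drift from the real person's profile even when each individual frame looks plausible.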


An example of the intensity of Barack Obama’s eyebrow lift measured over a 250-frame video clip.

Shruti Agarwal/Hany Farid/Yuming Gu/Mingming He/Koki Nagano/Hao Li

The three main types of deepfakes, referred to here as face swaps, lip syncs and puppet-master fakes, all rely on rewriting how a victim’s entire face or mouth moves. But the subtleties of a person’s movement fingerprint are too hard to replicate, even for the best impersonator. That let the researchers apply the fingerprint they developed to distinguish between real and fake videos.
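Conceptually, detection then becomes an outlier test: build a profile from clips known to be authentic, and flag any clip whose fingerprint falls too far from it. The sketch below is a deliberately simplified stand-in (a distance threshold on synthetic data) for the paper's actual classifier; the numbers and the 190-dimensional fingerprint size are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins: authentic clips cluster around one profile,
# fake clips drift away from it.
real_fps = rng.normal(0.5, 0.1, size=(200, 190))
fake_fps = rng.normal(0.0, 0.1, size=(20, 190))

# Build the leader's profile from authentic clips only.
profile = real_fps.mean(axis=0)
dists = np.linalg.norm(real_fps - profile, axis=1)
threshold = np.percentile(dists, 95)  # accept ~95% of authentic clips

def is_fake(fingerprint: np.ndarray) -> bool:
    """Flag a clip whose fingerprint falls outside the profile."""
    return bool(np.linalg.norm(fingerprint - profile) > threshold)

flagged = sum(is_fake(fp) for fp in fake_fps)
print(f"{flagged} of {len(fake_fps)} fake clips flagged")
```

The key property this illustrates is that the detector is trained only on the real person's footage, so it doesn't need examples of any particular forgery technique in advance.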

But their detection technique had shortcomings, too. It was less reliable when the person of interest in the video consistently looked away from the camera rather than addressing it directly. So a fake of, say, a live interview shot off-camera would be harder to catch with this method.

The latest paper — which was based on research funded by Google, Microsoft and DARPA — is titled “Protecting World Leaders Against Deep Fakes” and published by researchers at the University of California, Berkeley, and the University of Southern California.

