Researchers at UC Berkeley and USC are developing new techniques to detect deepfakes: hyper-realistic AI-generated videos of people doing or saying things they never did or said.
The technology could help journalists, policy makers, and the public stay ahead of fake videos of political or economic leaders that could be used for malicious purposes, such as swinging an election, destabilizing a financial market, or inciting civil unrest and violence.
The video shows two kinds of deepfakes: "face swap" and "lip-sync." It features figures such as Mark Zuckerberg and Elizabeth Warren, using real videos of them as face-swap material. Warren's video was face-swapped with a Saturday Night Live clip of Kate McKinnon impersonating her, superimposing Warren's features onto McKinnon's. "Algorithms control the lip movements and facial expressions to match McKinnon's," the video explains.
The video later shows a "lip-sync" deepfake of Barack Obama: the audio was swapped for a different recording, and his lips and cheeks were modified to match the new audio.
The machine learning methodology compares a subject's characteristic movements in authentic footage, such as head tilts, eyebrow raises, and chin motion, against the same movements in a suspect video. In tests, the methodology detected deepfakes 92 to 96 percent of the time.
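The article does not describe the researchers' implementation, but the comparison idea can be sketched in miniature. The sketch below assumes per-frame motion signals (head tilt, eyebrow raise, chin motion) have already been extracted from both clips; the function names, the correlation-based score, and the threshold are all hypothetical illustrations, not the actual Berkeley/USC method, which relies on learned models of a subject's facial behavior.

```python
import numpy as np

def movement_features(head_tilt, eyebrow_raise, chin_motion):
    """Stack per-frame motion signals (1-D arrays) into a frames x signals matrix."""
    return np.stack([head_tilt, eyebrow_raise, chin_motion], axis=1)

def correlation_score(reference, candidate):
    """Mean per-signal Pearson correlation between a reference clip of the real
    person and a candidate clip. Genuine footage of the same person should show
    strongly correlated movement patterns; a deepfake typically will not."""
    scores = [np.corrcoef(reference[:, i], candidate[:, i])[0, 1]
              for i in range(reference.shape[1])]
    return float(np.mean(scores))

def looks_authentic(reference, candidate, threshold=0.5):
    """Flag the candidate as authentic if its movements track the reference.
    The 0.5 threshold is illustrative, not from the research."""
    return correlation_score(reference, candidate) >= threshold
```

In practice the reference signals would come from many hours of verified footage of the subject, and the decision rule would be a trained classifier rather than a fixed correlation threshold.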