How Does a Deepfake Video Work?
“Deepfakes employ two separate sets of algorithms acting in conjunction: the first algorithm creates a video, and the second one tries to determine if the video is real or not,” according to Merriam-Webster’s Words We’re Watching blog.
“If the second algorithm can tell that the video is fake, the first algorithm tries again, having learned from the second algorithm what not to do. And the pair of algorithms go around and around until they spit out a result that the programmers feel is sufficiently real-looking.”
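To make that loop concrete, here is a minimal sketch of a generative adversarial network in PyTorch. It learns to mimic a toy one-dimensional distribution rather than video frames, and the network sizes, learning rates and target data are illustrative assumptions — but the adversarial back-and-forth is the same one described above.

```python
# Minimal GAN sketch: the "first algorithm" (generator) vs. the
# "second algorithm" (discriminator). Toy data, not a video pipeline.
import torch
import torch.nn as nn

LATENT_DIM = 8

# Generator: turns random noise into a candidate sample.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# Discriminator: scores how likely a sample is to be real.
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1),  # raw logit; >0 means "looks real"
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "real" data: N(3.0, 0.5)
    noise = torch.randn(64, LATENT_DIM)
    fake = generator(noise)

    # Discriminator learns to tell real from fake.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator learns "what not to do": its loss shrinks only when
    # the discriminator mistakes its output for real data.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    if step % 500 == 0:
        print(f"step {step}: fake mean={fake.mean().item():.2f} (target 3.0)")
```

After enough rounds, the generator's outputs drift toward the target distribution, because fooling the discriminator is the only way to reduce its loss — the "around and around" dynamic the blog describes.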
The technique first got attention in 2014, when a scientific paper described that process in detail and named it a “generative adversarial network.” The term “deepfake” originated in 2017 on Reddit, where users were grafting female celebrities’ faces onto performers in existing porn videos.
Notorious Deepfake Examples
Few deepfakes have been deployed in the political realm. Carefully edited videos, such as the one slowed down to make House Speaker Nancy Pelosi sound as if she were slurring her words, are still the norm, as are altered still images.
In 2018, Oscar-winning filmmaker Jordan Peele collaborated with BuzzFeed to show how easily a politician’s image could be manipulated and to warn people not to believe every video they see. The video showed former President Barack Obama appearing to explain deepfakes in not-safe-for-work language before the voice was revealed to be Peele’s.
A deepfake of Facebook CEO Mark Zuckerberg caught experts’ attention last summer; designed to look like an actual news clip, the video purports to show Zuckerberg crowing over his control of the world’s data.
“Things have changed,” David Doermann, director of the University at Buffalo Artificial Intelligence Institute and a former computer vision program manager at the Defense Advanced Research Projects Agency (DARPA), told a House of Representatives committee last summer. “The process of content creation and media manipulation can be automated. Software can be downloaded for free from online repositories, it can be run on your average desktop computer with a GPU card by a high school student and it can produce personalized, high-quality video and audio overnight.”
How to Spot a Deepfake Video
How do you know if you’re watching a deepfake? There are tells — “the shape of light and shadows, the angles and blurring of facial features or the softness and weight of clothing and hair,” reports The Washington Post. “But in some cases, a trained video editor can go through the fake to smooth out possible errors, making it that much harder to assess.”
Researchers at the University of California, Berkeley and the University of Southern California developed a method to detect deepfakes, published last year in a paper called “Protecting World Leaders Against Deep Fakes.” They used video of politicians, as well as of their Saturday Night Live impersonators, to build a baseline of each subject’s mannerisms; minor facial movements such as nose wrinkling or lip tightening offer a key to whether a video is genuine.
It’s not foolproof, the researchers write in their paper. The more facial features are included in the baseline, the less accurate the method becomes, and it’s effective only in certain contexts, such as when the subject is looking toward the camera.
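A rough sketch of that idea in Python: assume per-frame facial action-unit intensities (the nose wrinkling and lip tightening mentioned above) have already been extracted by a tracker such as OpenFace, summarize each clip by how those signals move together, and fit a one-class model to genuine footage. The feature count, model settings and random placeholder data below are assumptions for illustration, not the paper’s actual pipeline.

```python
# Behavioral-baseline sketch: flag clips whose facial mannerisms
# fall outside a model trained only on genuine footage.
import numpy as np
from sklearn.svm import OneClassSVM

N_FEATURES = 6  # per-frame action-unit intensities for one speaker

def clip_signature(frames: np.ndarray) -> np.ndarray:
    """Summarize a clip as pairwise correlations between features.

    frames: (n_frames, N_FEATURES) action-unit intensities.
    The intuition: how one person's facial movements co-occur is
    distinctive and hard for a face swap to reproduce.
    """
    corr = np.corrcoef(frames, rowvar=False)
    return corr[np.triu_indices(N_FEATURES, k=1)]

# Placeholder data: 200 clips (300 frames each) of genuine video.
rng = np.random.default_rng(0)
real_clips = rng.normal(size=(200, 300, N_FEATURES))
baseline = np.stack([clip_signature(c) for c in real_clips])

# One-class model: learn what "normal" mannerisms look like.
model = OneClassSVM(nu=0.05, gamma="scale").fit(baseline)

# A suspect clip is flagged if it falls outside the baseline.
suspect = rng.normal(loc=0.5, size=(300, N_FEATURES))
verdict = model.predict(clip_signature(suspect).reshape(1, -1))
print("likely fake" if verdict[0] == -1 else "consistent with baseline")
```

The design choice that matters here is one-class training: the model only ever sees genuine footage of the person being protected, so in principle it can flag manipulation techniques it has never encountered.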
Social media and tech companies are working with universities to improve detection methods. Facebook, Microsoft and several leading universities have launched the Deepfake Detection Challenge, which calls for the development of technologies to detect deepfakes; the challenge ends March 31. In addition, Google has created a database of faked faces to support deepfake detection efforts.
Other researchers are developing forensic techniques to spot these videos, using machine learning models trained to detect pixel artifacts left over after the alterations and to compare suspected fakes with real videos.
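The details vary by team, but one common shape for such a detector is a small convolutional network trained to classify face crops as real or manipulated from low-level pixel patterns. The sketch below is a generic illustration in PyTorch, with random tensors standing in for labeled video frames; it is not any specific published system.

```python
# Generic forensic-detector sketch: a small CNN that scores 64x64
# RGB face crops for manipulation artifacts.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),           # logits: [real, manipulated]
)

loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(detector.parameters(), lr=1e-4)

# Placeholder batch: face crops with real/fake labels.
frames = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

# One training step on the labeled batch.
loss = loss_fn(detector(frames), labels)
opt.zero_grad(); loss.backward(); opt.step()

# At inference time, softmax over the logits gives a manipulation score.
score = torch.softmax(detector(frames), dim=1)[:, 1]
print(score)
```

In practice, such networks depend on large labeled corpora of real and manipulated frames — which is what resources like Google’s database of faked faces, mentioned above, are meant to supply.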
Much of the support for such research is coming from DARPA’s Media Forensics program, an “attempt to level the digital imagery playing field, which currently favors the manipulator,” writes Matt Turek, a program manager in DARPA’s Information Innovation Office.
“The forensic tools used today lack robustness and scalability, and address only some aspects of media authentication; an end-to-end platform to perform a complete and automated forensic analysis does not exist,” he adds.
How to Prevent Deepfake Videos from Spreading
In the meantime, the federal government is turning to policy and legislation to attempt to prevent deepfakes from making inroads online. (Among social media companies, Facebook has announced it will remove deepfakes unless they are clearly satire; Twitter has banned them; and Reddit has banned them as well, taking down the r/deepfakes subreddit.)
In October 2019, the Senate passed a bill that would require the Department of Homeland Security to issue an annual report on deepfakes, examining the threat and suggesting technologies to combat them. The Deepfake Report Act is currently with a House committee. DHS’ own public-private analytic exchange group, which brings together members of the private sector and government analysts to examine security issues, will be taking on the topic during its 2020 meetings and may issue its own report in the fourth quarter.
Best practices for detection are still in the early phases: “There’s no money to be made out of detecting these things,” Nasir Memon, a professor of computer science and engineering at New York University, told The Washington Post.
John Villasenor, a professor of electrical engineering, public affairs, law and management at UCLA, writes on the Brookings Institution’s TechTank blog that legal measures can be taken against deepfakes, including charges of copyright infringement or defamation, but that such remedies come only after the fact and do not prevent the videos from spreading in the first place.
“My suspicion is, as the technology evolves, we’ll want to consider large-scale regulation to deal with some of the likely issues,” Jack Clark, policy director of OpenAI, told a House committee last summer. “But I think that today we need to concentrate on actions that give all media consumers better information about the entire intersection of fake media, artificial intelligence and digital platforms.”