Why Deepfakes Failed to Take Hold
Earlier this year, experts were particularly concerned about “deepfake” videos: wholly fabricated clips of politicians or other prominent figures saying and doing things that never happened.
Deepfakes are different from simple manipulated videos: the more sophisticated ones use machine learning to enhance the realism of the final result, making them much harder to distinguish from reality.
Basic manipulated or misrepresentative videos were relatively common in 2020, especially after the election. One debunked video, retweeted by a member of the Trump family, falsely claimed to show ballots cast for President Trump being burned, but officials in the city where it was filmed proved that the footage showed sample ballots that would not have counted in an election.
Another, also retweeted by a member of the Trump family, claimed to show invalid ballots loaded into a wagon and rolled into a vote counting center. “The ‘ballot thief’ was my photographer,” tweeted a reporter for a local TV station. “He was bringing down equipment for our 12-hour shift.”
Why were deepfakes not as prevalent as feared? Simple edits and shameless falsehoods are easier to create — and worked just as well.
“You can think of the deepfake as the bazooka and the video splicing as a slingshot,” Hany Farid, a University of California, Berkeley professor who specializes in visual disinformation, tells NPR. “And it turns out that the slingshot works.”
Singer, who is also the author of LikeWar, which examines the impact of social media on war and politics, writes that “misinformation themes garnered tens of millions of mentions — a significant and pernicious slice of the billion-plus total mentions of election-related themes.
“These falsehoods were consumed by audiences across the country, but unevenly, especially targeting swing states,” primarily Florida, Pennsylvania and Michigan.