Nov. 18, 2020

Election Results Remained Secure Under Barrage of Disinformation, Altered Video

Faked videos known as ‘deepfakes’ had less impact than expected, but false information and cruder manipulated images saturated the internet.

The spread of false information, and the effort to limit it, played a major role in the 2020 presidential election, but much of the disinformation came from an unexpected source and spread in unexpected ways.

“The data … reflects a horror-movie trope: ‘The killer is inside the house,’” writes author Peter W. Singer, a strategist and senior fellow at the New America think tank, in an essay for Defense One. “In 2016, Russia drove U.S. media narratives … then shaped online discussion via thousands of bots and trolls. … But 2020 election-related misinformation was mostly a domestic affair.”

Social media companies and the U.S. intelligence community worked to cut down on the amount of misinformation that reached the public in the weeks before and the days after Nov. 3, with some success.

The Cybersecurity and Infrastructure Security Agency, part of the Department of Homeland Security, sponsored a robust “Rumor Control” page on its website that remained active after Nov. 3.

In the days after the election, as claims of a rigged vote multiplied on social media, the agency also issued a blunt joint statement from the Election Infrastructure Government Coordinating Council executive committee and the Election Infrastructure Sector Coordinating Council.

“The November 3rd election was the most secure in American history,” read the statement, signed by the 10 council members. “While we know there are many unfounded claims and opportunities for misinformation about the process of our elections, we can assure you we have the utmost confidence in the security and integrity of our elections, and you should too.”

Why Deepfakes Failed to Take Hold

Earlier this year, experts were especially concerned about “deepfake” videos: wholly fabricated footage of politicians or other prominent figures saying or doing things that never happened.

Deepfakes differ from simple manipulated videos: the more sophisticated fakes use machine learning, typically generative adversarial networks, to synthesize the footage itself, making them much harder to distinguish from reality.
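
Detection, for its part, leans on the same machinery: published detectors commonly train a binary classifier on individual video frames. The sketch below is an illustrative version of that approach, not any vendor's product; the model choice and names are assumptions.

```python
# Illustrative frame-level deepfake classifier (an assumption-laden sketch,
# not any specific detector). Requires PyTorch and torchvision.
import torch
import torch.nn as nn
from torchvision import models

class FrameClassifier(nn.Module):
    """Binary real-vs-fake classifier over single video frames."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # pretrained weights optional
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # one logit per frame
        self.backbone = backbone

    def forward(self, frames):  # frames: (batch, 3, 224, 224), RGB in [0, 1]
        return torch.sigmoid(self.backbone(frames))  # P(fake) per frame

model = FrameClassifier()
dummy_frames = torch.randn(4, 3, 224, 224)  # stand-in for four video frames
print(model(dummy_frames).squeeze())        # per-frame fake probabilities
```

A video-level verdict usually averages such per-frame scores, and the best generators are trained precisely to fool classifiers of this kind, which is part of why detection trails generation.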

Basic manipulated or misrepresentative videos were relatively common in 2020, especially after the election. One debunked video, retweeted by a member of the Trump family, falsely claimed to show ballots cast for President Trump being burned; officials in the city involved showed that the footage depicted sample ballots that would not have counted in the election.

Another, also retweeted by a member of the Trump family, claimed to show invalid ballots loaded into a wagon and rolled into a vote counting center. “The ‘ballot thief’ was my photographer,” tweeted a reporter for a local TV station. “He was bringing down equipment for our 12-hour shift.”

Why were deepfakes not as prevalent as feared? Simple edits and shameless falsehoods are easier to create — and worked just as well.

“You can think of the deepfake as the bazooka and the video splicing as a slingshot,” Hany Farid, a University of California, Berkeley professor who specializes in visual disinformation, tells NPR. “And it turns out that the slingshot works.”

Singer, who is also the author of LikeWar, which examines the impact of social media on war and politics, writes that “misinformation themes garnered tens of millions of mentions — a significant and pernicious slice of the billion-plus total mentions of election-related themes.

“These falsehoods were consumed by audiences across the country, but unevenly, especially targeting swing states,” primarily Florida, Pennsylvania and Michigan.

Technology to Detect Deepfakes Remains in Development

This doesn’t mean that the threat of deepfakes has diminished, however. Nearly 50,000 deepfake videos were found online as of June 2020, twice the number present in January 2020, according to Sensity, a visual threat intelligence company that released a report titled “The State of Deepfakes” in 2019.

The entertainment industry was the most heavily targeted, with 62.7 percent of the deepfakes found in that sector; politics accounted for 4 percent of deepfakes, Sensity reported in a blog post.

“The implication is that those who would use deepfakes as part of an online attack have not yet mastered the technology, or at least not how to avoid any breadcrumbs that would lead back to the perpetrator,” writes Gary Grossman, senior vice president of the technology practice at Edelman and global lead of the Edelman AI Center of Excellence, at VentureBeat. “These are also the most compelling reasons … that we have not seen more serious deepfakes in the current political campaigns.”

Social media and technology firms as well as the federal government continue to work on ways to detect deepfakes more readily. Facebook’s Deepfake Detection Challenge, for example, released a data set in June containing more than 100,000 videos, along with several detection algorithms, to help researchers hone existing techniques for uncovering deepfakes.
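
To give a sense of how researchers put such a corpus to work, the sketch below samples frames from a video file and averages a per-frame classifier’s scores into a single verdict. This is an assumed workflow, not the challenge’s actual evaluation code: the file path is hypothetical, `model` stands in for any frame-level detector like the earlier sketch, and OpenCV and PyTorch are required.

```python
# Hypothetical scoring loop over a video corpus such as the DFDC release.
import cv2
import torch

def score_video(path, model, every_n=30, size=224):
    """Average a per-frame fake probability over sampled frames."""
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:  # sample roughly one frame per second
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV reads BGR
            frame = cv2.resize(frame, (size, size))
            tensor = torch.from_numpy(frame).permute(2, 0, 1)  # HWC -> CHW
            tensor = tensor.float().unsqueeze(0) / 255.0
            with torch.no_grad():
                scores.append(model(tensor).item())
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else None

# print(score_video("dfdc/sample.mp4", model))  # hypothetical path
```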

The Defense Advanced Research Projects Agency has two programs underway to improve the detection of deepfakes: The Media Forensics, or MediFor, program is developing algorithms that can tell analysts whether a photo or video has been faked, and how; the Semantic Forensics, or SemaFor, program is developing additional algorithms to better identify and characterize deepfakes.

“Both SemaFor and MediFor are intended to improve defenses against adversary information operations,” states a Congressional Research Service report from August that provides an overview of the programs.
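
DARPA has not released those algorithms as public code, but a much older forensic heuristic in the same spirit, error level analysis, gives a feel for what such tools automate: recompress a JPEG and amplify the differences, since edited regions often recompress differently from the rest of the image. A minimal sketch, assuming only the Pillow library and a hypothetical file name:

```python
# Error level analysis (ELA), a classic image-forensics heuristic.
# Illustrative only; this is not MediFor or SemaFor code.
import io
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90, scale=20):
    """Highlight regions of a JPEG that recompress differently."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)  # recompress in memory
    recompressed = Image.open(buf)
    diff = ImageChops.difference(original, recompressed)
    return ImageEnhance.Brightness(diff).enhance(scale)  # amplify residue

# error_level_analysis("suspect.jpg").show()  # hypothetical file name
```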
