How AI deepfakes test voter confidence and election integrity

Evan Vucci/AP/File
Joe Biden signs an executive order on artificial intelligence at the White House, Oct. 30, 2023, in Washington, while Vice President Kamala Harris looks on.

With more than 60 countries – accounting for nearly half the world’s population – holding elections, 2024 is turning out to be a record year for voting. Artificial intelligence threatens to undermine this democratic wave with audio and images that may look realistic, but are fake.

Often, these AI deepfakes are meant purely to entertain, lampooning one candidate or another. But others are intended to sway elections, deepen political divisions, and diminish trust in democracy itself. 

Here’s a look at what has happened so far, and where the technology might be headed.

Why We Wrote This

Artificial intelligence fraud has popped up in elections around the world this year, including the campaign for America’s upcoming presidential election. These deepfakes are often meant to amuse, but some aim to sway elections and sow division.

How has AI been used in this election cycle?

The phenomenon first popped up in a big way in Slovakia’s elections last fall. Deepfake audio of an interview with pro-European leader Michal Šimečka appeared to capture him bragging about rigging the election and proposing to raise the price of beer. The manipulated audio went viral, and Mr. Šimečka’s party, though leading in the polls, lost to the party led by pro-Russian Robert Fico. Since then, deepfakes have turned up in elections in Bangladesh, Nigeria, Turkey, the United Kingdom, France, India, and elsewhere.

In the United States presidential election, one of the first instances of manipulation to gain national attention was a January robocall, ostensibly from President Joe Biden, urging Democrats not to vote in the New Hampshire primary. The prank’s funder, a Democrat, said he did it as a warning that AI is a threat to elections. Since then, fake images have circulated depicting former President Donald Trump dancing with an underage girl, Vice President Kamala Harris wearing a communist uniform, and singer Taylor Swift endorsing Mr. Trump. Ms. Swift later endorsed Ms. Harris. 

How effective has the AI been? 

Not enough to sway elections, at least not yet, say researchers who have studied the issue. Even in Slovakia’s case, recent commentary has suggested that the factors behind Mr. Fico’s victory were broader than a single deepfake.

Warnings about AI aren’t new. What has changed? 

Many nations are holding their first elections since tech company OpenAI released ChatGPT to the public in late November 2022. ChatGPT was the first widely available “chatbot” using generative AI, a technology that lets computers process human prompts and produce answers fluent enough to sound as if they came from a person. Such systems can create not only realistic text but also audio, still images, and video.

Vadim Ghirda/AP/File
Moldovan President Maia Sandu greets Ukrainian President Volodymyr Zelenskyy in Bulboaca, Moldova, June 1, 2023. She has been a frequent subject of misinformation created with artificial intelligence.

Can computers now think? 

Not like people. Instead, they process so much material from the internet and elsewhere that they can predict the next word in a sentence with a high degree of accuracy or conjure an image that mimics human creativity, even if they’re not conscious in the way people are.
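
For readers curious what that prediction step looks like in practice, here is a minimal sketch in Python. It assumes the openly available GPT-2 model and the Hugging Face transformers library, neither of which is named in this article; today’s chatbots apply the same idea at vastly larger scale.

# A minimal sketch of "predicting the next word," assuming the Hugging Face
# "transformers" library and the small, openly available GPT-2 model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The 2024 election will be decided by"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every possible next token

# Turn the scores at the final position into probabilities and show the
# five most likely continuations of the prompt.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {p.item():.1%}")

Sampling from those probabilities one word at a time, over and over, is how such a system strings together a paragraph that reads as if a person wrote it.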

Do generative AI deepfakes differ from misinformation in previous elections?

The difference is in the speed and volume of fake content. Deepfake videos, in particular, used to require expensive equipment, plenty of expertise, and time. With generative AI, anyone can produce a fake video within minutes, without filming anything: they simply describe what they want in words, and the computer generates a realistic clip on the spot. This technological advance has led to surges of fake content around specific events.

For example, the July assassination attempt against former President Trump inspired, in three days, as much election-related AI content as in all of May, says Emilio Ferrara, a computer science professor at the University of Southern California who is tracking the technology’s use in U.S. elections.
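
To make concrete how little is now required, here is a minimal Python sketch of text-to-image generation. It assumes the Hugging Face diffusers library and a publicly released Stable Diffusion checkpoint, neither of which is named in this article; text-to-video tools follow the same prompt-in, media-out pattern.

# A minimal sketch of "describe it in words, get a picture back."
# Assumes the Hugging Face "diffusers" library and a publicly released
# Stable Diffusion checkpoint; the repository name below is illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")  # a single consumer GPU is enough

# One sentence of text replaces the camera, the crew, and the editing suite.
prompt = "a smiling politician rescuing a duck from a pond, photorealistic news photo"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("synthetic_news_photo.png")

Video generators work the same way in principle, taking a written description and returning moving images within minutes.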

Has the U.S. been inundated with deepfakes designed to sway voters? 

Strangely, no. “I was worried that we’d be massively overwhelmed,” says V.S. Subrahmanian, a computer scientist at Northwestern University who, in July, set up a service to help journalists detect deepfakes. “But we’re not.” In a newly released tracker of AI manipulation in elections, the German Marshall Fund lists fewer than 30 incidents involving the current U.S. presidential race. 

Instead, many of the deepfakes that have emerged are obvious parodies. Since Mr. Trump spread unsubstantiated rumors about Haitian immigrants eating pets and wildfowl in an Ohio community, he has been depicted in various poses saving cats and ducks. After President Biden dropped out of the race, doctored audio had Vice President Harris saying, “I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate. Thanks, Joe.”

That doesn’t mean a deluge of more malign deepfakes won’t occur nearer Election Day, warns Siwei Lyu, a computer science and engineering professor at the University at Buffalo. But, “It’s too early to call,” he says.

The infamous fake Slovak audio appeared two days before that country’s parliamentary election. On the eve of Pakistan’s elections in February, the main opposition candidate was falsely depicted urging his followers not to vote. Opponents of Mr. Trump or Ms. Harris may be holding their fire.

In a new report, Professor Ferrara and his colleagues charge that a small but active group of users on the social platform X is coordinating activity and promoting one another’s fake stories to attack the integrity of democratic processes.

What about threats from overseas?

Authoritarian governments such as those of China, Russia, and Iran appear to have adopted a similar focus. While they may try to sway particular races, their influence campaigns, including deepfakes, aim chiefly to promote political divisiveness in the U.S. and denigrate democracy as a solution to the world’s problems. “If you undermine the credibility of the biggest democracy in the world, then you are really undermining democracy at its core,” Professor Ferrara says.

Is the U.S. able to counter these AI-driven efforts?

Researchers are heartened by the attention given to deepfakes in this election cycle, which should help voters become more discerning about what they see and hear. The level of attention from federal and state officials has also soared. In an Oct. 7 briefing with reporters, intelligence officials warned that foreign actors would likely try to sow doubt about voting results even after the election, especially if the races are close.

“The more [officials] see the potential impact, the more they’re going to allocate resources towards countermeasures,” says Professor Lyu. “I’m cautiously optimistic.”
