How AI Could Sway the 2024 Election

The technology could be used to target key voters.

Artificial intelligence has been transforming nearly every part of society, and now it could be a deciding factor in our upcoming elections. Advances in this digital technology are providing new (and faster) tools for political messaging — which could have a profound impact on how Americans vote. 

Even though we’re still a year out, the 2024 election is already shaping up to be another close one, with President Biden and Donald Trump expected to have a contentious rematch. A poll by CNN shows Trump narrowly leading Biden in a hypothetical face-off, 49 percent to 45 percent. And neither candidate has much room for expanding their reach: According to that same poll, 51 percent of voters nationwide say there’s no chance they’d vote for Biden, while 48 percent say they’d never cast a ballot for Trump. That means the winner of the election could come down to just a handful of states including Georgia and Pennsylvania — as it did in both 2016 and 2020. Only this time around, some experts predict AI could play a decisive role in swaying a very small part of the American electorate, because of its ability to precisely target audiences. 

The technology has already been deployed in an attempt to sway voters in the New Hampshire presidential primary. The state attorney general’s office opened an investigation into an “unlawful attempt” at voter suppression after it was reported that a robocall impersonating Biden was telling people not to vote in Tuesday’s primary. It’s unclear who’s behind these calls, and Trump’s team has denied any involvement.

“AI is going to be used to target undecided voters, and since it’s a close election, the number of undecided voters could be 5, 6, or 7 percent,” says Darrell West, vice president of the Governance Studies program at the Brookings Institution. “The technology will help identify the best way to reach those voters, and what issues could move them to vote one way or the other.” 

What will this look like in practice? And why are experts worried about AI’s impact on the electoral process? Read on. 

How have Democrats and Republicans used AI?

Believe it or not, this technology is already powering political machinations: After President Biden announced his bid for reelection in April, the Republican National Committee released what it called “an AI-generated look into the country’s possible future if Joe Biden is re-elected in 2024.” As a fake newsreader announces Biden’s 2024 victory, the video shows a series of realistic-looking images of dystopian disasters, including financial markets crashing and the southern border getting overrun by migrants.

West says that AI has advanced so rapidly, and become so readily available, that campaigns no longer need a software designer or video-editing expert on staff to create realistic-looking animations or images. But Republicans aren’t the only ones taking advantage of these high-tech tools: West says he has seen them used “in attack ads on both sides.” 

“AI has leveled the playing field to some extent,” West tells us. “You don’t need expensive TV consultants to develop ads — anybody can use readily available tools, and then use social media to share them.” 

Not all candidates are using AI for nefarious purposes, though. West points out that former GOP presidential candidate Asa Hutchinson’s campaign created an AI-powered chatbot so voters could ask questions about his stances on various issues. “Hutchinson designed an algorithm that looks at all of his speeches, statements, and any press releases,” says West. “And it will generate a response based on what he’s already said, which is a helpful way to get information out to voters.” 
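
The Hutchinson campaign hasn’t published its code, but what West describes is essentially a retrieval system: match a voter’s question against a corpus of the candidate’s past statements and answer with the closest one. Here’s a minimal, hypothetical sketch of that pattern in Python. The sample statements, the question, and the TF-IDF matching are our own illustrative assumptions, not the campaign’s actual implementation; a production chatbot would typically layer a language model on top.

```python
# A minimal sketch of a retrieval-based campaign chatbot, along the lines
# West describes: it answers only with things the candidate already said.
# The corpus and question below are hypothetical, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: excerpts from speeches, statements, press releases.
corpus = [
    "I support lowering prescription drug costs for seniors.",
    "Border security must be strengthened with more agents and technology.",
    "Small businesses are the backbone of our economy and deserve tax relief.",
]

def answer(question: str) -> str:
    """Return the candidate's own past statement most relevant to the question."""
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform(corpus + [question])
    # Compare the question (last row) against every statement in the corpus.
    scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()
    return corpus[scores.argmax()]

print(answer("What will you do about the border?"))
# -> "Border security must be strengthened with more agents and technology."
```

The appeal of this design is that it can only repeat things the candidate has actually said, which limits the risk of the bot putting new words in his mouth.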

Much of the concern isn’t about how U.S. candidates will use the technology, but rather how foreign adversaries could use AI to sway the election. Chris Bail, a sociology and public policy professor at Duke University, predicts that Russia could leverage generative AI tools like ChatGPT to sow discord by posing as American voters online. For example, a bot could pretend to be an angry Democrat or Republican on social media, ranting about political topics and getting fellow posters riled up. “This type of chatbot might be more effective at polarizing real social media users, though we do not yet have conclusive research to verify this speculation,” he says.

How does AI lead to misinformation? 

While the RNC acknowledged its use of AI for the video about a post-apocalyptic Biden win, West says others — including foreign meddlers — might not. That’s leading to a real fear that AI could supercharge the spread of misinformation. 

There’s already been a rise in deepfakes, a combination of the words “deep learning” and “fake.” These videos and images are created with AI and intended to make people appear to say or do something they didn’t: An altered video showing Biden giving a speech attacking transgender people went viral earlier this year, as did artificially created photos of Trump resisting arrest by New York City police following his indictment for business fraud. (And deepfakes have been causing trouble for years outside the political realm, including the disturbing trend of AI “revenge porn.”)

West warns that the effects of this technology on the public at large shouldn’t be underestimated: “We may have an election that ends up getting decided based on false narratives.” We’re already seeing AI spread misinformation in New Hampshire’s primary, where some voters received robocalls telling them not to participate in the primary and to save their vote for the general election in November. The message even sounds like Biden: “What a bunch of malarkey,” it begins, echoing one of the president’s favorite phrases. While officials don’t know who’s behind these calls, Kathy Sullivan, a former New Hampshire Democratic Party chair, told NBC News that “it’s obviously somebody who wants to hurt Joe Biden.”

Is AI-generated content regulated?

AI-generated political ads and other campaign messaging aren’t currently regulated, but West thinks they should be. “Candidates already have to disclose who the sponsor of their ad is, so it makes sense to extend that to AI-related content as well,” he says.

Some lawmakers agree with his sentiment: Democratic Rep. Yvette Clarke has sponsored legislation that would require anyone creating synthetic images to add a watermark indicating that fact. Bail believes labeling AI content is a good first step toward curbing the technology’s role in spreading false information. 
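
Clarke’s bill doesn’t spell out an implementation, but at its simplest, a disclosure watermark is a visible label stamped onto the image. Here’s a minimal, hypothetical sketch using the Pillow imaging library; the file names, label text, and placement are placeholder choices of ours, and real proposals also contemplate invisible, machine-readable marks.

```python
# A rough sketch of the kind of visible disclosure label Rep. Clarke's
# bill contemplates. This is an illustration, not the bill's actual spec;
# the file names below are placeholders.
from PIL import Image, ImageDraw

def label_as_synthetic(path_in: str, path_out: str) -> None:
    """Stamp a visible 'AI-generated' notice in the image's corner."""
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Place the label near the bottom-left corner over a dark backing box
    # so it stays legible on any background.
    x, y = 10, img.height - 30
    draw.rectangle([x - 5, y - 5, x + 110, y + 20], fill=(0, 0, 0))
    draw.text((x, y), "AI-generated", fill=(255, 255, 255))
    img.save(path_out)

label_as_synthetic("campaign_ad.png", "campaign_ad_labeled.png")
```

A visible label like this can be cropped out in seconds, which previews the enforcement problem experts raise below.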

The White House has even taken note of the dangers of AI: Just last month, Biden signed a sweeping executive order directing federal agencies, including the Department of Homeland Security, to develop safety guidelines around its use. The order also covers user privacy, civil rights, consumer protections, and workers’ rights. While Bail and other experts have praised this move, an executive order isn’t a permanent law, and a future administration could rescind it.

As the federal government tries to get a grip on machine learning, a handful of states have already taken the AI issue into their own hands. In September 2019, Texas became the first state to outlaw political deepfake videos, making them punishable by up to a year in jail and as much as $4,000 in fines. Others have followed its lead: California Gov. Gavin Newsom signed legislation a month later banning the distribution of artificially created or manipulated videos within 60 days of an election, unless the video carries a statement disclosing it has been altered in some way. But this law isn’t permanent (it was slated to sunset this year). 

Senate Majority Leader Charles Schumer (D-NY) talks to reporters between sessions of the bipartisan Artificial Intelligence Insight Forum on Capitol Hill in November 2023.

While these laws may be intended to protect voters, they’ve also raised questions from civil rights groups, like the American Civil Liberties Union of California, about free speech. (The ACLU wrote a letter to Newsom saying that the bill wouldn’t solve the problem of deceptive political videos and instead would only lead to “voter confusion, malicious litigation, and repression of free speech.”) Bail points out that AI misinformation is extremely difficult to regulate, because there isn’t yet a reliable, widely used method for detecting it, and bad actors could easily sidestep safety measures like watermarking by building their own tools. 

What are ways to spot AI-generated content?

At first, AI-generated content wasn’t very convincing: Synthetic images and video were not only costly to produce, they were also crude and easy to spot. That’s certainly not the case anymore: Sophisticated tools can now create cloned human voices and hyper-realistic images in a matter of seconds, at minimal cost. (Including the image at the top of this post.) Combine this capability with social media, and computer-generated visuals can spread at a previously unimaginable speed and scale. 

We’re already seeing the cultural impacts of this rapidly advancing technology: People are now 3 percent less likely to spot false tweets generated by AI than those written by humans, according to a study published in Science Advances. While that might not seem like much, disinformation is poised to grow significantly as AI tools become even more sophisticated. (The Copenhagen Institute for Future Studies estimates that AI-generated content could soon account for 99 percent or more of all information on the internet.)

The good news is that there are ways to avoid getting tricked (though it can happen to even the most media-savvy, so don’t feel too bad if you’re hoodwinked). West says AI-generated content tends to be “really unusual or completely outrageous.” For instance, a social media post showing children learning Satanism in a library was generated by AI. (“Protect your children, now they are infiltrating children as young satanists and presenting Baphomet as God,” the post read.) Another trick for spotting AI? Check to make sure the “people” in the image have the correct number of fingers or appendages, both of which AI frequently flubs.

Other content may require a closer look: If you’re reading an article in which the same word or phrase is being repeated over and over again, there’s a good chance it was written by artificial intelligence. These computer-generated stories are practically stuffed to the brim with keywords to fill up space, often making them sound “off” and unnatural. 
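
That repetition tell can even be made mechanical. Below is a toy Python sketch that flags text when a single meaningful word dominates it; the stopword list and the 5 percent threshold are arbitrary choices for illustration, and, as Bail’s point about detection suggests, nothing this simple amounts to a reliable AI detector.

```python
# A toy illustration of the repetition heuristic described above: flag text
# whose most common meaningful word appears far more often than expected.
# The threshold and stopword list are arbitrary choices for this sketch.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "that"}

def looks_keyword_stuffed(text: str, threshold: float = 0.05) -> bool:
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    if not words:
        return False
    # Find the single most frequent meaningful word and its count.
    _, count = Counter(words).most_common(1)[0]
    # Flag if that one word makes up more than `threshold` of all words.
    return count / len(words) > threshold

article = "Best pizza in town. Our pizza is fresh pizza. Order pizza today, pizza lovers!"
print(looks_keyword_stuffed(article))  # True: "pizza" dominates the text
```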

Keeping those points in mind can help you spot content created by artificial intelligence — whether it’s being presented as “shocking” cultural commentary or political info aimed at swaying the future of our country. “There has always been the potential for propaganda to influence an election,” says West, “but I think people have to be especially on guard now.”