The technology could be used to target key voters.
Artificial intelligence has been transforming nearly every part of society, and now it could be a deciding factor in our upcoming elections. Advances in this digital technology are providing new (and faster) tools for political messaging — which could profoundly impact how Americans vote.
With the election just a few months away, the 2024 presidential race has taken several unexpected turns. In a stunning announcement, President Biden ended his campaign for a second term and endorsed Vice President Kamala Harris to run in his place. A CNN poll shows Harris narrowing the gap against Trump, though the former president still holds 49 percent support to Harris’s 46 percent. That means the election could come down to just a handful of swing states, including Georgia and Pennsylvania, as it did in 2016 and 2020. But this time, some experts predict A.I. could play a decisive role in swaying that small slice of the American electorate because of its ability to precisely target audiences.
The technology has already been used to try to sway voters. In February, the Federal Communications Commission banned the use of A.I.-generated voices in robocalls. The unanimous ruling came a month after some New Hampshire residents received robocalls impersonating Biden and telling them not to vote in the state’s primary.
“A.I. is going to be used to target undecided voters, and since it’s a close election, the number of undecided voters could be 5, 6, or 7 percent,” Darrell West, vice president of the Governance Studies program at the Brookings Institution, told us in January. “The technology will help identify the best way to reach those voters and what issues could move them to vote one way or the other.”
What will this look like in practice? And why are experts worried about A.I.’s impact on the electoral process? We’ll explain all that and more.
How have Democrats and Republicans used A.I.?
Believe it or not, this technology is already powering political machinations. Last year, the Republican National Committee released an A.I.-generated Biden attack ad that depicted a dystopian future awaiting us if Biden were reelected.
As a fake newsreader announces Biden’s 2024 victory, the video shows a series of realistic-looking images of dystopian disasters, including financial markets crashing and the southern border getting overrun by migrants.
West said that A.I. has advanced so rapidly and become so readily available that campaigns no longer need a software designer or video-editing expert on staff to create realistic-looking animations or images. However, Republicans aren’t the only ones taking advantage of these high-tech tools. West said he has seen them used “in attack ads on both sides.”
“A.I. has leveled the playing field to some extent,” West told us. “You don’t need expensive TV consultants to develop ads — anybody can use readily available tools, and then use social media to share them.”
Not all candidates are using A.I. for nefarious purposes, though. West pointed out that former GOP presidential candidate Asa Hutchinson’s campaign created an A.I.-generated chatbot so voters could ask questions about his stances on various issues. “Hutchinson designed an algorithm that looks at all of his speeches, statements, and any press releases,” said West. “And it will generate a response based on what he’s already said, which is a helpful way to get information out to voters.”
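As a rough illustration of how a candidate chatbot like that could work, here is a minimal, hypothetical Python sketch that answers a question by pulling the most relevant excerpt from a small set of made-up campaign statements. The Hutchinson campaign’s actual system hasn’t been published, so the corpus, the word-overlap scoring, and the function names below are assumptions for demonstration only.

```python
# Minimal sketch of a "what has the candidate said about X?" lookup.
# The real campaign chatbot's design is not public; this only illustrates
# the general idea of answering from a candidate's own statements.

from collections import Counter
import re

# Hypothetical excerpts from speeches and press releases (made up).
STATEMENTS = [
    "We must secure the border with more agents and modern technology.",
    "I support lowering taxes on small businesses to spur job growth.",
    "Rural broadband is essential infrastructure for our economy.",
]

def tokenize(text: str) -> Counter:
    """Lowercase word counts, ignoring punctuation."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def best_match(question: str, statements: list[str]) -> str:
    """Return the statement with the most word overlap with the question."""
    q = tokenize(question)
    def overlap(statement: str) -> int:
        return sum((q & tokenize(statement)).values())
    return max(statements, key=overlap)

print(best_match("What is your position on taxes?", STATEMENTS))
# -> "I support lowering taxes on small businesses to spur job growth."
```

A production system would be far more sophisticated, but the core idea is the same: the bot only repeats back what the candidate has already said on the record.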
Much of the concern isn’t about how U.S. candidates will use this tech, but about how foreign adversaries could use it to sway the election. Chris Bail, a sociology and public policy professor at Duke University, predicted that Russia could leverage generative A.I. tools like ChatGPT to sow discord by posing as American voters online. For example, someone could pretend to be an angry Democrat or Republican on social media, ranting about political topics and getting fellow posters riled up. “This type of chatbot might be more effective at polarizing real social media users, though we do not yet have conclusive research to verify this speculation,” he told us in January.
How does A.I. lead to misinformation?
While the RNC acknowledged its use of A.I. in the video about a post-apocalyptic Biden win, West believes others, including foreign meddlers, might not be so forthcoming. That’s fueling a real fear that the technology could supercharge the spread of misinformation.
There’s already been a rise in deepfakes, a term that combines “deep learning” and “fake.” These A.I.-created videos and images are intended to make people appear to say or do things they never did. For instance, A.I.-generated photos circulated of Trump resisting arrest by New York City police following his indictment for business fraud in March of last year. More recently, TikTok pulled a video featuring doctored audio of Harris appearing to speak incoherently. (And deepfakes have been causing trouble for years outside the political realm, including the troubling trend of A.I. “revenge porn.”)
West warns that the effects of this technology on the public at large shouldn’t be underestimated: “We may have an election that ends up getting decided based on false narratives.”
Is A.I.-generated content regulated?
A.I.-generated political ads and other campaign messaging aren’t currently regulated at the federal level, but West thinks they should be. “Candidates already have to disclose who the sponsor of their ad is, so it makes sense to extend that to A.I.-related content as well,” he said.
Federal regulators agree with this sentiment: FCC Chair Jessica Rosenworcel has introduced a proposal that could require political advertisers to disclose when they use A.I.-generated content in radio and TV ads. Though the proposal is part of a broader push to regulate A.I. in politics, The Associated Press notes that the disclosure requirements wouldn’t apply to streaming services.
As the federal government tries to get a grip on the technology, a handful of states have already taken the A.I. issue into their own hands. In September 2019, Texas became the first state to outlaw political deepfake videos, making them punishable by up to a year in jail and as much as $4,000 in fines. Others have followed that lead: A month later, California Gov. Gavin Newsom signed legislation banning the distribution of artificially created or manipulated videos within 60 days of an election unless the video carries a statement disclosing that it has been altered. But that law wasn’t permanent; it was slated to sunset in 2023.
While these laws may be intended to protect voters, they’ve also drawn free speech concerns from civil liberties groups like the American Civil Liberties Union of California. (The ACLU wrote a letter to Newsom saying that the bill wouldn’t solve the problem of deceptive political videos and instead would only lead to “voter confusion, malicious litigation, and repression of free speech.”) Bail pointed out that regulating A.I. misinformation is extremely difficult because there isn’t yet a reliable, widely used method for detecting it. Bad actors could easily sidestep safety measures like watermarking by building their own generation tools that simply omit them.
How to spot A.I.-generated content
In its early stages, A.I.-generated content wasn’t very convincing: synthetic images and video were not only costly to produce, they were also crude and obvious. That’s certainly not the case anymore. Sophisticated tools can now create cloned human voices and hyper-realistic images in a matter of seconds, at minimal cost. (This includes the image at the top of this post.) Combined with the reach of social media, that capability lets computer-generated visuals spread further than ever before.
We’re already seeing the cultural impacts of this rapidly advancing technology: People are 3 percent less likely to spot false tweets generated by A.I. than those written by humans, according to a 2023 study published in Science Advances. While that might not seem like much, disinformation is poised to grow significantly as A.I. tools become even more sophisticated. (The Copenhagen Institute for Futures Studies estimates that A.I.-generated content could soon account for 99 percent or more of all information on the internet.)
The good news is that there are ways to avoid getting tricked (though it can happen to even the most media-savvy, so don’t feel too bad if you’re hoodwinked). West said A.I.-generated content tends to be “really unusual or completely outrageous.” For instance, a social media post showing children learning Satanism in a library was generated by A.I. (“Protect your children, now they are infiltrating children as young Satanists and presenting Baphomet as God,” the post read.) Another trick for spotting A.I.? Check whether the “people” in an image have the correct number of fingers and limbs, details that image generators frequently flub.
Other content may require a closer look: If you’re reading an article in which the same word or phrase is repeated over and over, there’s a good chance it was written by artificial intelligence. These computer-generated stories are often stuffed to the brim with keywords to fill up space, which can make them sound “off” and unnatural.
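To make that repetition red flag concrete, here is a minimal, hypothetical Python sketch that counts how often each three-word phrase recurs in a piece of text. It only illustrates the heuristic described above; it is not a reliable detector, and the sample text and threshold are made up for the example.

```python
# Rough illustration of the "repetition" red flag: count how often each
# three-word phrase recurs in a text. High repeat counts are only a hint,
# not proof, that the text was machine-generated.

from collections import Counter
import re

def repeated_trigrams(text: str, min_repeats: int = 3) -> dict[str, int]:
    """Return three-word phrases that appear at least `min_repeats` times."""
    words = re.findall(r"[a-z']+", text.lower())
    trigrams = [" ".join(words[i:i + 3]) for i in range(len(words) - 2)]
    counts = Counter(trigrams)
    return {phrase: n for phrase, n in counts.items() if n >= min_repeats}

# Made-up, keyword-stuffed sample text.
sample = ("Best running shoes for runners. These running shoes for runners "
          "are the best running shoes for runners who want running shoes.")
print(repeated_trigrams(sample))
# -> {'running shoes for': 3, 'shoes for runners': 3}
```

A human editor would rarely repeat the same phrase that densely, which is exactly why the pattern stands out.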
Keeping those points in mind can help you spot content created by artificial intelligence — whether it’s being presented as “shocking” cultural commentary or political info aimed at swaying the future of our country. “There has always been the potential for propaganda to influence an election,” said West, “but I think people have to be especially on guard now.”