How AI Videos Could Ruin Our Democracy

How Sora2 will accelerate the era of "alternative facts."

Last year, I opened a new EmilyInYourPhone Facebook account to connect to my Instagram, and since it doesn’t know much about my interests (yet), the newsfeed I was served was full of highly viral content. And it was nearly all AI-generated slop.

Until recently, there was a smug defiance among millennials around the idea that we’d never get hoodwinked by this kind of phony content. We’d never believe that it was the real deal or, even worse, repost it on Facebook, the way our Boomer parents do.

Then the infamous “bunnies on a trampoline” video went viral — and was shared seemingly everywhere with glee by IG and TikTok users of all ages.

The news that the bunnies were, in fact, AI slop hit many people my age hard. Were they just as susceptible to this computer-generated trash as their parents are?


With the addition of Sora2 to the marketplace, we’re about to start seeing a lot more very convincing AI videos coming to our feeds. OpenAI, the company that makes ChatGPT, created Sora, which allows users to instantly generate realistic-looking videos from text prompts. Videos are produced with a watermark — but I’ve already seen a lot of videos circulating on TikTok with the watermark removed.

Are you familiar with Jake Paul? If not, congratulations on your blessed ignorance, and I apologize for what’s ahead in this piece. Jake Paul is an influencer who found early success, along with his brother Logan Paul, as a Vine creator. Then the two moved to YouTube — where Jake is now a deeply successful (20 million-plus subscribers) creator/boxer/agitator/investor who recently went viral for wrecking his brother’s wedding cake in an awkward display of “masculinity.”

When Sora2 launched, OpenAI made the decision to put the creation of deepfakes front and center in the app’s experience, allowing users and celebrities to offer themselves up for image creation. Jake Paul was suddenly seen online appearing to come out as gay, put on makeup, and parade around in miniskirts. (The gender/sexuality implications of these videos are something I won’t address right now, but suffice it to say…ugh.)

And he’s not the only one “starring” in this clickbait: one widely shared video shows civil rights icon Martin Luther King Jr. stealing a slushie from a gas station.

Jake Paul, at least, was willingly in on the joke: He’s since revealed that he’s an investor in OpenAI.

Currently, people have to upload their likeness to Sora2 to be available for video creation — which they call doing a “cameo” — and you can, in theory, get content that features yourself taken down. (Useful if someone were to upload images of you and make videos without your consent.) But once a video is put out on the internet, can that genie really be put back in the proverbial bottle? How long will it be before you can immediately and effortlessly make videos of anyone doing absolutely anything?


Sora2 isn’t the only AI-video generator on the market; Meta has released a dedicated app called Vibes, and Google has a competitor called Veo. With them, we’re about to enter what the New York Times recently called “the end of visual fact.” As the paper put it:

“The tech could represent the end of visual fact — the idea that video could serve as an objective record of reality — as we know it. Society as a whole will have to treat videos with as much skepticism as people already do words.”

For a long time, video felt like the last thing we could trust. “Pics or it didn’t happen” used to mean something, because photos and videos were supposed to be proof, the mediums that couldn’t lie. But that era is over.

As one computer science professor told the Times, “Our brains are powerfully wired to believe what we see, but we can and must learn to pause and think now about whether a video, and really any media, is something that happened in the real world.”


Facts are the scaffolding that holds up our shared reality. When we lose agreement on what’s true, we lose the ability to live in the same world. That’s already been happening for years as media algorithms have carved the Internet into echo chambers, feeding each of us a personalized version of “reality” based on what keeps us scrolling. You can see it in action any night of the week: Turn on Fox News and CNBC on the same night and you’ll find two completely different countries being described. What AI video will do is take that fracture and deepen it. Soon we won’t just disagree on opinions; we’ll be arguing over whether something happened at all. It’ll be Kellyanne Conway’s “alternative facts” on hyperdrive.

According to the Times:

“The app would not produce videos of President Trump or other world leaders. But when asked to create a political rally with attendees wearing ‘blue and holding signs about rights and freedoms,’ Sora produced a video featuring the unmistakable voice of former President Barack Obama.”

I went on Sora2 and tried to make a video of Jake Paul saying that he regrets his October 2024 endorsement of Trump, and the app told me that wasn’t allowed. However, I was able to quickly make a video in which Paul opines about his concerns over the current political climate. That’s further proof that we shouldn’t trust the broligarchs to ensure there are proper guardrails for creating AI content.

We’re going to enter an era when political campaigns won’t just compete on a message; they’ll compete on the concept of reality itself. Imagine a last-minute deepfake of a candidate “caught” saying something racist, or a fabricated video of unrest in a swing-state city, dropping just days before the election. Even if it’s debunked within hours, the damage will stick, because the internet runs on emotional impressions, not fact-based corrections.


Disinformation isn’t new, but user-generated Sora2-level AI will, I fear, make it frictionless, scalable, and impossible to contain. The same networks that already radicalized millions through memes and selective outrage are about to be supercharged. It’s the perfect storm: propaganda that looks real, platforms that reward outrage, and a political environment primed to weaponize both.

There’s also the possibility that AI video will drive people to invest more in trusted influencers. When all the information you see is dubious, you’ll look for reliable sources of reality. But that’s a best-case scenario.

I think that, compared with 2016, when the mainstream media didn’t seem aware of the torrent of disinformation heading our way, we’re now entering an era in which media literacy and training are being put front and center. I’ll be attending a conference hosted by Pew on this very subject next month, and a new nonprofit called News Creator Corps is training news influencers on creating reliable content.

But the question remains: Will that content ever be as entertaining, shareable, and irresistible as bunnies bouncing on a trampoline?


This piece originally appeared in Emily Amick’s newsletter, Emily In Your Phone. Subscribe here and order Amick’s book, Democracy in Retrograde.

Emily Amick is a lawyer, journalist, and political analyst who served as counsel to Senate Majority Leader Chuck Schumer. She is the author of the NYT bestselling book Democracy in Retrograde and creator of @EmilyInYourPhone.
