Sifting Through A.I. Slop: What’s Real, What’s Not, and Why It Matters

A technologist explains this new and largely unwelcome form of online content.

This ChatGPT-generated image of a girl holding a puppy illustrates the kind of pictures that make the rounds on social media, purporting to be real.

You’ve probably encountered images in your social media feeds that look like a cross between photographs and computer-generated graphics. Some are fantastical — think Shrimp Jesus — and some are believable at a quick glance — remember the little girl clutching a puppy in a boat during a flood?

These are examples of A.I. slop: low- to mid-quality content — video, images, audio, text, or a mix — created with A.I. tools, often with little regard for accuracy. It’s fast, easy, and inexpensive to make this content. A.I. slop producers typically place it on social media to exploit the economics of attention on the internet, displacing higher-quality material that could be more helpful.

A.I. slop has been increasing over the past few years. As the term “slop” indicates, that’s generally not good for people using the internet.

A.I. slop’s many forms

The Guardian published an analysis in July 2025 examining how A.I. slop is taking over YouTube. The journalists found that nine of the platform’s 100 fastest-growing channels feature A.I.-generated content like zombie football and cat soap operas.

Listening to Spotify? Be skeptical of that new band, The Velvet Sundown, which appeared on the streaming service with a creative backstory and derivative tracks. It’s A.I.-generated.

In many cases, people submit A.I. slop that’s just good enough to attract and keep users’ attention, allowing the submitter to profit from platforms that monetize streaming and view-based content.

The ease of generating content with A.I. enables people to submit low-quality articles to publications. Clarkesworld, an online science fiction magazine that accepts user submissions and pays contributors, temporarily stopped taking new submissions in 2023 because of the flood of A.I.-generated writing it was getting.

These aren’t the only places where this happens — even Wikipedia is dealing with low-quality A.I.-generated content that strains its volunteer moderation system. If editors can’t keep up with removing it, a key information resource people depend on is at risk.

Harms of A.I. slop

A.I.-driven slop is making its way upstream into people’s media diets as well. During Hurricane Helene, opponents of President Joe Biden cited A.I.-generated images of a displaced child clutching a puppy as evidence of the administration’s purported mishandling of the disaster response. Even when it’s apparent that content is A.I.-generated, it can still be used to spread misinformation by fooling some people who briefly glance at it.

A.I. slop also harms artists by causing job and financial losses and by crowding out content made by real creators. The algorithms that drive social media consumption often don’t distinguish this lower-quality A.I.-generated content from human-made work, and it can displace entire classes of creators who previously made their livelihoods from online content.

Wherever platforms enable it, you can flag content that’s misleading or problematic, and on some platforms you can add community notes to provide context. For content that’s actively harmful, you can report it to the platform.

Along with forcing us to be on guard for deepfakes and “inauthentic” social media accounts, A.I. is now leading to piles of dreck degrading our media environment. At least there’s a catchy name for it.


Adam Nemeroff is assistant provost for innovations in learning, teaching, and technology at Quinnipiac University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.