
The Hidden Dangers Of AI Slop Content

2025-09-14 · Loraine Lee · 6-minute read
Artificial Intelligence
Digital Wellness
Misinformation

What Is AI Slop, and Why Is It Everywhere?

Imagine triplet babies cleaning a supermarket with brooms, a kitten being swarmed by ants from the inside, or a toddler with an orange for a head being rescued by an orca. If this sounds like a nonsensical fever dream, you've just been introduced to 'AI slop'—the latest wave of content flooding the internet.

The term describes low-quality videos, text, images, and audio generated by AI, mass-produced to keep users scrolling and generate ad revenue for creators. While it may seem ridiculous that anyone would watch an AI-generated shark with sneakers fighting a crocodile-plane hybrid, the explosive growth of this content proves its effectiveness at capturing our attention.

Evidence of this takeover is clear. The Guardian found that nine of the 100 fastest-growing YouTube channels in July 2025 hosted purely AI-generated content. A Singaporean YouTube channel, Pouty Frenchie, which posts only AI videos of a cartoon bulldog, became the fourth most-viewed in the country in August 2025. Its most popular 16-second clip amassed over 231 million views in just three months.

Once you watch one, the algorithms ensure you see more. Our own test confirmed this: after just two days of viewing AI-generated shorts, our social media feeds were inundated with a barrage of similarly baffling and sometimes disconcerting videos.

Once a user watches an AI-generated video, the social media algorithms built to keep them engaged on the same platforms will keep recommending more of the same.

The Financial Incentive Behind Junk Content

The primary driver behind AI slop is profit. Content farms churn out massive volumes of this content to capitalize on advertising revenue from social media platforms. Associate Professor Brian Lee from the Singapore University of Social Sciences (SUSS) explains, "The economic incentives are the main reason why AI slop is so prevalent... The black sheep uses generative AI tools to generate a massive volume of content at near-zero cost."

This profit motive encourages creators to use shocking or bizarre visuals—like ants eating a cat or fake images of Holocaust victims—because, as Prof. Lee notes, "bizarre and dark turns would capture attention more effectively." The low barrier to entry, requiring no video production skills, means a growing number of individuals are joining in, creating tutorials to help others make thousands of dollars a month from this content.

It takes around 10 hours to create one of YouTube channel Cat and Hat’s surreal feline soap opera shorts, according to its creator.

Brain Rot: The Impact on Young Minds

This flood of low-quality content is a major concern for parents. A recent survey in Singapore found that 81% of parents worry about their children's exposure to inappropriate content online. These concerns are well-founded, as experts warn of long-term negative effects, especially on developing brains.

Mr. Eric Kua, a father of a nine-year-old, noticed "odd" videos on his son's feed. "To children, a video is a video," he worries. "They don’t distinguish whether it’s made by AI or a human... they might accept anything as normal or credible."

A subgenre of AI slop has been aptly named "brain rot," a term Oxford University Press even declared its 2024 Word of the Year. It refers to the supposed mental deterioration from overconsuming trivial online content. This universe includes characters like Ballerina Cappuccina (a ballerina with a coffee mug for a head) and Bombardiro Crocodilo (a crocodile-headed military plane), which often feature nonsensical AI-generated audio.

While 14-year-old Sabrina Ng sees these videos as just a way to "relax after school," experts argue the impact is more significant.

Mr. Eric Kua has noticed "odd" videos popping up on his son Felix's YouTube feed.

More Than Just Nonsense: The Dangers of Misinformation

While a coffee-mug ballerina is easy to dismiss, AI slop has a much more insidious side. Generative AI can create hundreds of articles in minutes, leading to an explosion of fake news sites, made-up recipes, and counterfeit blogs, all designed to make money from ads with no regard for quality or trust.

Conrad Tallariti of DoubleVerify warns, "What sets AI slop apart... is the lack of human oversight, poor content authenticity and arbitrage tactics." His firm found over 200 AI-generated websites designed to mimic trusted publishers like ESPN and CBS, hosting false, clickbait content.

This has real-world consequences. During floods in North Carolina last year, AI-generated images of fake victims spread rapidly online. This not only made it harder for first responders to locate real victims but also polluted the information ecosystem when people needed it most.

During the floods in North Carolina last year, AI-generated images showing supposed victims of the flood started spreading wildly online.

The Cognitive Cost of Endless Scrolling

Even seemingly harmless slop takes a toll. Dr. John Pinto, head of counselling at ThoughtFull, says the overstimulating nature of these videos can have serious cognitive impacts. "It erodes users’ ability to concentrate, impairs retention, and raises anxiety, particularly among youth, mimicking the behavioral patterns seen in gambling addiction," he warns.

Researchers from the National Institute of Education (NIE) agree, highlighting the risk of cognitive stunting. Dr. Wong Lung Hsiang states, "Constant exposure to shallow, derivative material normalises incoherence and reduces opportunities to practise critical thinking or creative imagination." This constant diet of digital junk food harms intellectual development and desensitizes viewers to absurdity.

This degradation of the online experience has been termed "enshittification" by author Cory Doctorow—the process where online platforms deteriorate as they prioritize profits over user experience.

Even the silliest forms of AI slop can have repercussions for the people who consume it, experts said.

Can We Clean Up the Slop?

Addressing the AI slop problem requires a joint effort. While platforms like Meta and YouTube have policies against repetitive or misleading content, critics argue it's not in their financial interest to remove high-engagement slop. Meta now labels a wider range of AI content, and YouTube updated its monetization policy to discourage unoriginal material, but the effectiveness of these measures is unclear.

Broader solutions are emerging. The AI Alliance, launched by IBM and Meta, aims to promote responsible AI. IBM's Chief Technology Officer, Kitman Cheung, emphasizes transparency and the need for "guardrails... to detect content and take it offline," potentially through adversarial AI models that can identify AI-generated content.
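To make the guardrail idea concrete, here is a deliberately toy sketch, not IBM's actual system: real detection relies on trained adversarial models, but the basic loop of scoring content and flagging it past a threshold can be illustrated with a simple "burstiness" heuristic (human prose tends to vary sentence length more than machine-generated filler). The function names and the threshold below are hypothetical.

```python
import statistics


def burstiness(text: str) -> float:
    """Variance of sentence lengths in words; a crude proxy for human pacing."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.variance(lengths)


def flag_for_review(text: str, threshold: float = 2.0) -> bool:
    """Toy guardrail: flag uniformly paced text as possible machine output."""
    return burstiness(text) < threshold


# Uniform, repetitive pacing gets flagged; varied pacing does not.
uniform = "The cat ran fast. The dog ran fast. The bird flew high. The fish swam deep."
varied = "Stop. The old lighthouse keeper had not seen a storm like this in forty years."
print(flag_for_review(uniform))  # True
print(flag_for_review(varied))   # False
```

A production detector would replace the heuristic with a classifier trained against the very generators it polices, which is what makes the approach "adversarial": each side improves in response to the other.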

Protecting children requires a layered response. Professor Looi Chee Kit from NIE suggests expanding child-safety laws to cover AI content and funding nationwide AI media literacy programs. "Beyond traditional media literacy, children must learn AI literacy, such as questioning who made the content, why it was made, and whether it can be verified," he says.

Ultimately, users need to be aware of how this content is designed. As Dr. Pinto explains, "The content is uncanny, often absurd, and our brains are wired to stop and look at what feels 'off'... People aren’t watching because it’s good. Instead, they’re watching because it’s hard to look away."
