The Great AI Backlash Has Begun
A popular, disturbingly lifelike video on OpenAI’s new social app shows Sam Altman sprinting from a Target with stolen computer chips, pleading with police not to take his “precious technology.” This absurdist clip, a parody of the company's own CEO, highlights a growing question on everyone's mind: what is this technology actually for?
Public patience with AI-generated media is wearing thin. Whether it's derision for synthetic ad campaigns on YouTube or angry scribbles on AI startup posters in the New York City subway, public discontent with the AI boom is becoming impossible to ignore.
The initial optimism of 2022, when generative AI was pitched as a tool to simplify our lives, has curdled into deep cynicism. Many now feel the technology benefits only the wealthiest technologists in Silicon Valley, who seem to have an endless supply of cash for projects that solve no real problems. Three years ago, as ChatGPT debuted, a Pew Research survey found that nearly one in five Americans expected AI to benefit them. By 2025, the mood has inverted: 43 percent of U.S. adults now believe AI is more likely to harm them in the future.
Welcome to the Era of AI Slop
As AI proliferates, public skepticism is escalating into open hostility. Friend, a startup that launched a massive $1 million ad campaign across the New York City subway system, has been a primary target. Most of its ads were defaced with graffiti calling the product “surveillance capitalism” and urging people to “get real friends.” One tag bluntly stated, “AI doesn’t care if you live or die.”
Other brands are facing a similar response. Skechers was criticized for an AI-generated campaign featuring a distorted woman, which was widely dismissed as lazy. Many of these posters were tagged with “slop,” a term now used to describe the cheap, joyless flood of AI content.
“The idea of authenticity has long been at the center of the social media promise... But a lot of AI-generated content is not following that logic,” explained Natalia Stanusch, a researcher at the nonprofit AI Forensics. She told Newsweek that “with this flood of content made using generative AI, there is a threat of social media becoming less social and users are noticing this trend.”
From Hype to Hostility
Skepticism toward generative artificial intelligence is growing on all sides. A technology once promised as a creative tool for the arts now feels more like market saturation. The friction is not just about quality; it's about what the technology represents.
In the entertainment world, backlash has erupted as artists find their voices and likenesses cloned without consent. After an AI-generated song mimicking his voice went viral, rapper Bad Bunny told his 19 million WhatsApp followers, “you don’t deserve to be my friends.” Drake and The Weeknd faced similar issues, with their AI replicas being removed from streaming platforms after public outcry.
“The public is finally starting to catch on,” said Gary Marcus, a professor emeritus at NYU and a vocal critic of the field. “Generative AI itself may be a fad and certainly has been wildly oversold.”
This saturation, critics argue, is driven by companies replacing human labor under the banner of innovation. Alex Hanna, director of research at the Distributed AI Research Institute (DAIR), noted that the narrative of AI as an inevitable future is used to dismiss valid questions. “It becomes an excuse to displace workers, to automate without accountability, and with serious questions about its impact on the environment,” Hanna said. “Companies want to make it look like AI is magic, but behind that magic is a labor force, data that’s been extracted without consent and an entire system built on exploitation.”
This resistance has even created its own language. The term “clanker,” borrowed from Star Wars, has become a popular Gen Z meme-slur for AI systems replacing human jobs, reflecting deep anxieties about labor displacement.
Still, some experts urge a balanced perspective. “The robots are coming, and they’re coming for everyone’s jobs,” said Adam Dorr of RethinkX. “But in the longer term, AI could take over the dangerous, miserable jobs we’ve never wanted to do.” The key challenge, he notes, is navigating this transition safely.
Is the AI Investment Bubble About to Burst?
From mental health chatbots to toilet cameras that analyze feces, AI is everywhere, and billions of dollars continue to pour in. But as saturation grows, what investors see as innovation, the public is starting to see as a bubble.
In the first half of 2025 alone, global investment in AI infrastructure hit $320 billion. The Trump administration is championing the $500 billion Stargate AI initiative, with backing from giants like Meta, Amazon, and OpenAI. The president has declared, “We will win the AI race just like we did the space race.”
However, many experts are skeptical that the numbers add up. Andrew Odlyzko, a professor at the University of Minnesota, warns that the projected spending on AI is outpacing plausible future economic returns. He points to “circular investment patterns,” where AI companies fund each other without sufficient real customer demand. “If there was a big rush of regular non-AI companies paying a lot for AI services, that would be different,” Odlyzko said. “But there is no sign of it.”
At scale, AI remains deeply unprofitable. A recent report from Bain predicted the industry will need to generate $2 trillion in annual revenue by 2030 to cover data center demands, a target it expects the industry to miss by around $800 billion.
“There is a lack of deep value,” tech columnist Ed Zitron told Newsweek. “The model is unsustainable.” With billions of dollars and national policy at stake, even skeptics agree that when the AI bubble finally bursts, its impact will be felt far beyond Silicon Valley.