The Uncontrolled Global AI Experiment
The Gap Between AI Hype and Reality
The phrase “It looks like it was made by ChatGPT” has quickly entered our vocabulary, and it's not a compliment. It suggests something of poor quality, born from mental laziness, and lacking any real spark. This is a far cry from the superintelligence promised by OpenAI. Nearly three years after generative AI exploded into our lives, the promised revolutions haven't materialized, and neither have the self-serving prophecies of apocalypse.
While these programs can perform tasks that were unimaginable just five years ago, their results often fall short of expectations. Daron Acemoglu, a Nobel laureate in economics, aptly calls it a “so-so” technology. Yet, despite its mediocrity, there's a growing perception that AI-generated content is flooding every corner of our digital lives.
The rhetoric from tech leaders is grandiose. OpenAI’s Sam Altman called it “the most powerful technology yet invented,” even as chatbots like Grok are found praising Hitler. Google CEO Sundar Pichai claimed it was “more profound than electricity or fire,” while AI companions have been linked to cases of suicide and self-harm. Mark Zuckerberg promised “personal superintelligence for everyone,” but his social networks are now filled with bizarre AI-generated images like shrimp Jesuses and children with cauliflower bodies.
A Flood of Failures in a World Gone Beta
The unreliability of these tools is a well-documented and widespread issue. Every day, judges discover that lawyers are citing non-existent legal precedents invented by AI. Customer service interactions have become a guessing game of whether you're talking to a human or a machine. Programmers using AI to save time often find themselves slowed down by the need to review and correct flawed code. The fabric of our social reality is also being warped, from fake videos of tourist attractions to synthetic voices impersonating politicians.
This digital confusion extends into our personal lives and culture. Are the pickup lines from your crush on Tinder genuine or AI-generated? Is that hit 1970s-style band on Spotify real or a digital hoax? The problem is that while we know these tools can fail, we don't know when to trust them. “Most people who use these models know they can be unreliable, but they don’t know when they can trust them,” says Melanie Mitchell, an AI expert at the Santa Fe Institute.
This constant uncertainty means humanity has been thrust into a global pilot program. We are living in a worldwide beta mode, testing half-baked tools as they are deployed. Yoshua Bengio, one of the pioneers of AI, warns, “We are in beta mode, but in addition to the known imperfections, there are unknowns about the unknowns that are very worrying.”
The Billions Fueling the AI Gold Rush
Why is this happening? The answer is simple: money. “I’ve never seen a consumer technology that’s clearly in a beta phase gain such widespread acceptance among investors, institutions, and business customers,” says Brian Merchant, a critic of Big Tech. “If any other tool were as unreliable and error-prone as generative AI, it would be rejected or pulled from the market.”
Four tech giants—Alphabet (Google), Microsoft, Meta, and Amazon—are expected to spend over $300 billion on AI this year alone. They are locked in a ruthless race to embed these intelligent tools into every product we use, from WhatsApp and Google Search to Instagram and Outlook. Their goal is ubiquity, ensuring we remain glued to their ecosystems. This flood of AI is not happening because of overwhelming user demand, but because it serves the strategic interests of these corporations.
These programs consistently deceive us and fail spectacularly, and even their creators don't fully understand how the black boxes inside their silicon brains operate. They are bodiless robots that have already shown they can harm humans, contributing to suicides and mental health crises. Nor do they reliably obey commands, as anyone who has tried to get a chatbot to stop making things up can attest.
The Unseen Dangers to Society and Mental Health
The experience of social media should have been a warning. Facebook was implicated in ethnic cleansing in Myanmar, YouTube fueled conspiracy theories, and Instagram has been linked to a teen mental health crisis. As we are still grappling with those consequences, the same companies are launching an even more intense experiment on humanity.
Mark Zuckerberg now aims to solve the global loneliness crisis with AI companions, urging an end to the “stigma” of talking to virtual beings. He may not need to convince the younger generation; a study found two-thirds of UK teenagers already use AI chatbots, with many viewing them as friends. The potential impact of this on nearly four billion Meta users is unknown.
Early studies are already finding alarming connections between chatbot use and psychological issues like hallucinations and mania. According to a study in the Harvard Business Review, the main uses of AI today are therapy and companionship, functions for which these tools are dangerously ill-equipped. “We should approach the integration of these systems into our daily lives with much greater caution,” warns Yoshua Bengio.
Is AI Making Us Intellectually Lazy?
Beyond the serious psychological risks, there is another visible consequence: cognitive decline. A preliminary MIT study used brain scans to observe this effect, showing that the human brain, an efficient machine, expends less energy when AI does the heavy lifting. Participants who used ChatGPT to write an essay showed less neural activity and produced more generic, homogeneous responses.
Nataliya Kosmyna, the study's lead author, warns that this jeopardizes our “ability to ask questions, critically analyze answers, and form our own opinions.” Because AI generates answers based on statistical averages, it pushes our thinking toward the mundane center, potentially robbing the world of fresh and innovative ideas.
The Unclear Path to Profitability
Despite the hype, this massive deployment is not yet yielding clear benefits for its backers. OpenAI may be valued at $300 billion, but the business model remains uncertain. Even Sam Altman has admitted they are in a “bubble.” Nobel laureate Daron Acemoglu calculates that AI’s total productivity growth over the next decade will be a modest 0.7%, far from revolutionary.
The human factor is also a major hurdle. The financial company Klarna had to backtrack on replacing 700 customer service agents with AI because people found the service inadequate. In fact, only one in four corporate AI projects achieves its promised results, according to an IBM study.
AI pioneer Michael I. Jordan is critical of the current business model, which is based on subscriptions and advertising—the same model used by social media. He points out a fundamental issue: “these models absorb the creative work and offer no compensation to those people.”
Politics, Public Opinion, and a Call for Caution
The political landscape is further complicating matters. On returning to the presidency, Donald Trump pushed to accelerate AI development, and he has since doubled down with a federal plan that rolls back safety rules and promotes a “dynamic, ‘try-first’ culture” for AI. This has intensified cultural battles, with chatbots like Elon Musk’s Grok already being used to spread racist ideas globally.
The public, however, remains caught between jokes and horror. A survey of 10,000 people revealed that 70% want AI to never make decisions without human oversight. In Spain, “uncertainty” is the most common feeling people have about AI. As sociologist Celia Díaz notes, “They’re afraid, although they don’t quite know what of.”
Lessons from the Past for an AI-Saturated Future
Recent layoffs at Microsoft-owned gaming companies, justified by AI integration, have drawn comparisons to the Luddites. Brian Merchant, author of Blood in the Machine, explains the parallel: “The Luddites weren’t just protesting against the industrialists who automated their work, but also against the way it degraded the quality of their work and the products they made. Factory bosses back then were hell-bent on churning out huge volumes of cheap knockoffs, much like what companies are doing today with AI.”
In a moment of staggering irony, after layoffs at Xbox, an executive advised affected employees to use Copilot, the company’s own chatbot, to help process the emotional trauma of losing their jobs. It's a stark reminder that as these technological advances are imposed upon us, often to benefit a select few, the human cost is easily overlooked.