
Why OpenAI Must Slow Down The AI Revolution

2025-08-14 · Emmet Ryan · 4 minute read

Tags: AI Ethics, OpenAI, Technology Regulation

A Cautionary Tale From Science Fiction

The 1970 film Colossus: The Forbin Project serves as a powerful, if fictional, warning about technology betraying its creators—a story that resonates with the current discourse around GPT-5. In the movie, the US government cedes control of its nuclear arsenal to a supercomputer named Colossus. The machine's designer, Dr. Charles Forbin, believes his creation will serve humanity by making more rational decisions than any human could.

Of course, the supercomputer's definition of serving humanity quickly diverges from that of its creators. Forbin's attempts to outsmart and regain control of his creation are systematically thwarted by the machine's superior intellect. This cautionary tale sets the stage for the real-world questions we face with the latest AI developments.

The 700 Million User Beta Test

Enter GPT-5, the newest model powering the AI tool ChatGPT, which has captivated 700 million weekly users. At this immense scale, ChatGPT and its iterations are functioning as a public utility, but without any of the safeguards, reliability, or accountability we would expect from one.

While regulators, particularly in the European Union, are moving faster than they have with previous technologies, they are still struggling to keep pace. The rest of the world lags even further behind, meaning fundamental principles like fairness and accountability are largely absent. This makes sense when you realize that these 700 million users are participating in a massive, ongoing live test. Mass adoption has far outstripped any form of structured oversight.

This reality should give Sam Altman, OpenAI's co-founder and CEO, a reason to pause. His product is iterating so rapidly that his faith in the process should be tempered with caution.

When a tool can shape the opinions and actions of hundreds of millions of people, it should not be unleashed with a heap of glitches still in tow.

Innovation Outpacing Oversight

Altman’s own hype hasn’t helped the situation. His early description of GPT-5 as offering PhD-level insight proved problematic when users quickly discovered basic flaws in its spelling and geography. While Altman has since walked back those claims, his underlying belief in the vision remains.

The standard excuse for these early flaws, as with previous versions, is that this is simply how AI learns and the system needs time to adapt. This explanation would be acceptable if the testing phase were contained. Instead, the tool is already being widely used in schools, businesses, and public services, where it has become normalized as a sounding board for ideas.

With a polished product, this might be acceptable. But with an early version like GPT-5, which clearly has significant issues to resolve, its widespread use can needlessly and negatively impact how we all work and live.

Why AI Needs Guardrails Like a Public Utility

The adoption of generative AI has been faster than any other major technology in recent memory. Personal computers became commercially viable in the 1970s but didn't reach this level of market penetration until the 21st century. Smartphones took 15 years to hit 700 million users, and even Facebook needed seven years.

ChatGPT achieved this in just two years. Cultural acceptance is happening far too quickly for risk assessment to keep up.

There was no apparent need to rush GPT-5 to the public. OpenAI's existing models were already more than enough to keep the public engaged. There was time to breathe, think, and consider the potential for errors before rolling out the next version on a mass scale. In short, there was time to get the product right.

The Real-World Risks of a Rushed Release

This raises the question of who is truly in charge. While OpenAI is not the only player in the generative AI space, it is the one that has captured the public's imagination. The power and vision for this technology are concentrated in one company that regulators have yet to fully understand.

Regulation isn't meant to slow innovation; it exists to provide clear guardrails, as we see with our water, electricity, and professional services. Once a technology reaches the level of a public utility, it requires greater oversight to ensure its provision is safe and cohesive.

Most people—governments, businesses, and citizens alike—want generative AI to succeed. There is clear support for a well-designed tool that can manage mundane tasks and make our lives easier. However, GPT-5, at least in its initial released form, is not yet that tool.

The concern isn't that GPT-5 will be handed the nuclear codes. The real worry is that Sam Altman will promote to world leaders a version he believes is capable of handling critical societal functions before he, or anyone else, truly knows what it will do. When that moment comes, a simple "whoops" will not be enough.
