The Man Who Can Halt The Next ChatGPT Release
In the high-stakes world of artificial intelligence, where innovation often outpaces caution, one university professor now holds what might be one of the most critical roles in the entire tech industry.
If you're concerned about the potential risks AI poses to humanity, from societal disruption to existential threats, then Zico Kolter of Carnegie Mellon University is a name you should know. He leads a four-person panel at OpenAI with the unprecedented authority to halt the release of new AI systems, including future versions of ChatGPT, if they are deemed unsafe.
A New Era of AI Oversight
Kolter's position as chair of OpenAI's Safety and Security Committee isn't new, but it has recently gained significant weight. The heightened importance stems from landmark agreements with regulators in California and Delaware, which cleared the way for OpenAI to adopt a new business structure that makes it easier to raise capital while holding the company to its original safety-focused mission.
Safety has always been a stated priority for OpenAI, but the company has faced criticism for allegedly prioritizing commercial success over caution, a tension that became public during the temporary ouster of CEO Sam Altman in 2023. These new regulatory commitments aim to formally place safety and security decisions above financial considerations. Kolter's role is central to this new structure, making him a key figure in ensuring OpenAI adheres to its promises.
The Power of the Panel
The authority of Kolter's committee is substantial. "We have the ability to do things like request delays of model releases until certain mitigations are met," Kolter explained. While he declined to comment on whether this power has been used, its existence serves as a powerful check on the company's development pipeline.
To preserve the panel's independence, CEO Sam Altman stepped down from it last year. The committee includes other notable figures, such as former U.S. Army General Paul Nakasone, who previously led U.S. Cyber Command. Though Kolter won't sit on the for-profit board, he has been granted "full observation rights" to attend all of its meetings, giving him complete insight into decisions related to AI safety.
Defining AI Dangers: From Existential to Everyday
The committee's definition of "unsafe" is deliberately broad. It isn't just focused on science-fiction scenarios where AI could be used to create bioweapons or launch devastating cyberattacks. The concerns are far more immediate and wide-ranging.
"Very much we’re not just talking about existential concerns here," Kolter stated. "We’re talking about the entire swath of safety and security issues and critical topics that come up when we start talking about these very widely used AI systems."
This includes cybersecurity risks, such as an AI agent accidentally leaking sensitive data, as well as the profound impact these models can have on individuals. "The impact to people’s mental health, the effects of people interacting with these models and what that can cause. All of these things, I think, need to be addressed from a safety standpoint," he added.
A Veteran's Perspective on an AI Explosion
Kolter, 42, is no newcomer to the field. He began studying machine learning in the early 2000s, long before it became a global phenomenon. "When I started working in machine learning, this was an esoteric, niche area," he recalled. "We called it machine learning because no one wanted to use the term AI because AI was this old-time field that had overpromised and underdelivered."
Despite his deep expertise and long history with the field, he admits the recent explosion in AI capabilities and the associated risks caught even insiders by surprise. This perspective gives him a unique understanding of both the potential and the perils of the technology he is now tasked with overseeing.
Cautious Optimism from Critics
AI safety advocates are watching these developments closely. Nathan Calvin, general counsel at the AI policy nonprofit Encode and a critic of OpenAI, expressed cautious optimism. He believes Kolter has the right background for the role and that the new commitments could be a "really big deal if the board members take them seriously."
However, the question remains whether these are just "words on paper" or a genuine shift in corporate governance. As Calvin noted, "I think we don’t know which one of those we’re in yet." The tech world will be watching to see if Zico Kolter and his committee can truly steer the world's leading AI company toward a safer future.