
OpenAI Demands Hardware Kill Switches for Devious AI Models

2025-09-13 · Wayne Williams · 3 minute read
AI Safety
Hardware
OpenAI

AI brain coming out of a laptop screen (Image credit: Getty Images / Surasak Suwanmake)

A top executive at OpenAI has issued a stark warning: future AI systems will need safety features, including kill switches, built directly into their hardware. This call to action highlights a growing concern that software-based controls may not be enough to manage the powerful and unpredictable AI of tomorrow.

The Urgent Call for Hardware-Level AI Safety

During a keynote at the AI Infra Summit in Santa Clara, Richard Ho, OpenAI's head of hardware, stressed that safety measures must be integrated at the silicon level. He argued that current safety protocols largely operate in software, which dangerously assumes that the underlying hardware is secure and will always follow instructions.

“It has to be built into the hardware,” Ho stated. “I am not saying that we can’t pull the plug on that hardware, but I am telling you that these things are devious, the models are really devious, and so as a hardware guy, I want to make sure of that.”

This perspective suggests that to truly control advanced AI, we need the ability to physically stop it, a fail-safe that software alone cannot guarantee. Proposed measures include real-time kill switches in AI clusters, advanced telemetry to monitor for abnormal behavior, and secure execution paths built into CPUs and accelerators.
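To make the idea concrete, here is a minimal software model of the kind of telemetry-driven fail-safe Ho describes: a watchdog that monitors accelerator readings and latches a kill signal after sustained abnormal behavior. This is an illustrative sketch only; the class name, threshold, and strike logic are hypothetical assumptions, not an OpenAI design, and a real implementation would live in silicon or firmware rather than Python.

```python
from dataclasses import dataclass

@dataclass
class TelemetryWatchdog:
    """Hypothetical monitor: latches a kill signal when telemetry
    readings exceed a threshold for several consecutive samples."""
    threshold: float        # abnormality cutoff, e.g. power draw in watts
    trip_count: int = 3     # consecutive abnormal samples before tripping
    _strikes: int = 0
    killed: bool = False

    def sample(self, reading: float) -> bool:
        """Feed one telemetry sample; returns True once the kill line trips."""
        if self.killed:
            return True     # latched: in hardware, power stays cut
        if reading > self.threshold:
            self._strikes += 1
        else:
            self._strikes = 0
        if self._strikes >= self.trip_count:
            self.killed = True   # in hardware, this would assert a power-cut line
        return self.killed

wd = TelemetryWatchdog(threshold=900.0)
for power in [650, 700, 950, 980, 990]:
    tripped = wd.sample(power)
print(tripped)  # True: three consecutive abnormal readings latch the kill
```

The key design point, mirroring Ho's argument, is that the latch is one-way: once tripped, software running on the monitored device cannot reset it, which is precisely the guarantee a software-only control loop cannot provide.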

Rethinking Infrastructure for Long-Lived AI Agents

The rapid evolution of generative AI is compelling a fundamental redesign of system architecture. Ho explained that future AI agents will be long-lived, continuously operating in the background and interacting with each other even when a user isn't directly engaged. This vision of persistent AI requires a new kind of infrastructure that is rich in memory and offers extremely low latency to manage ongoing sessions.

Networking, in particular, is emerging as a major bottleneck. “We’re going to have to have real-time tools in these – meaning that these agents communicate with each other,” Ho explained. “Some of them might be looking at a tool, some might be doing a website search. Others are thinking, and others need to talk to each other.”

Overcoming Critical Hardware Bottlenecks

To support this next generation of AI, the industry must overcome several significant hardware challenges. Ho detailed a list of critical hurdles, including the physical limits of high-bandwidth memory, the need for advanced 2.5D and 3D chip integration, and major progress in optical networking.

Perhaps most daunting are the extreme power requirements. Ho projected that future AI data center racks could consume as much as 1 megawatt each, a massive increase that demands new solutions in power delivery and cooling.

A Path Forward Through Industry Collaboration

Ho concluded his address by calling for a collaborative effort across the industry to build a more reliable and trustworthy AI ecosystem. He emphasized the urgent need for new benchmarks specifically designed for these agent-aware architectures to properly measure latency, efficiency, and power.

Furthermore, he advocated for observability to be treated as a core hardware feature rather than just a debugging tool, allowing for constant monitoring. “Networking is a real important thing, and as we head towards optical, it is unclear that the reliability of the network is there today,” he said, urging more rigorous testing of new communication technologies.

This strategy was detailed further in a report from The Next Platform.
