Sam Altman On AI Advances And Societal Adaptation

2025-06-19 · Sarah Perkel · 5 minute read
AI
Sam Altman
Societal Impact

Sam Altman, CEO of OpenAI, recently shared his thoughts on the evolution of artificial intelligence, acknowledging a surprising divergence between AI's technical advances and society's adaptation to them.

OpenAI CEO Sam Altman speaking at an event with SoftBank Group CEO Masayoshi Son in Tokyo, Japan. Sam Altman notes OpenAI's accuracy in "technical predictions" for AI, contrasted with unexpected human reactions. Photo Credit: Tomohiro Ohsumi via Getty Images

OpenAI's Technical Prowess vs. Societal Reality

Speaking on a recent episode of the "Uncapped with Jack Altman" podcast, Sam Altman remarked on OpenAI's success in forecasting AI's technical development. "I feel like we've been very right on the technical predictions," he stated. However, he admitted his expectations for societal change haven't quite matched the reality. "Then I somehow thought society would feel more different if we actually delivered on them than it does so far," Altman mused, adding, "But I don't even — it's not even obvious that that's a bad thing."

Altman highlighted that OpenAI believes it has effectively "cracked" reasoning capabilities within its models. He pointed to their o3 large language model as an example, suggesting it performs on par with a human PhD in many specialized areas. Despite this technological trajectory largely aligning with expectations, Altman observed that public and societal reactions have been less pronounced than he anticipated.

The Underwhelming Reaction to Advanced AI

"The models can now do the kind of reasoning in a particular domain you'd expect a Ph.D. in that field to be able to do," Altman explained. He expressed a sense of bewilderment at the relatively muted reception to these milestones. "In some sense we're like, 'Oh, OK, the AIs are like a top competitive programmer in the world now,' or 'AIs can get like a top score on the world's hardest math competitions,' or 'AIs can like, you know, do problems that I'd expect an expert Ph.D. in my field to do,' and we're, like, not that impressed. It's crazy."

While AI adoption is growing and already affecting businesses, with some companies using AI tools to supplement or even replace human workers, Altman believes society hasn't undergone the significant transformations one might expect. He feels the overall response to the technology has been underwhelming when measured against its vast potential.

"If I told you in 2020, 'We're going to make something like ChatGPT, and it's going to be as smart as a Ph.D. student in most areas, and we're going to deploy it, and a significant fraction of the world is going to use it and kind of use it a lot,'" he speculated. "Maybe you would have believed that, maybe you wouldn't have. But conditioned on that, I bet you would say, 'OK, if that happens, the world looks way more different than it does right now.'"

AI as a Copilot and Its Future Autonomy

Currently, Altman views AI as being most effective in a "copilot" role. However, he foresees the potential for major societal shifts if AI systems achieve true autonomy, particularly in fields like scientific research.

"You already hear scientists who say they're faster with AI," he noted. "Like, we don't have AI maybe autonomously doing science, but if a human scientist is three times as productive using o3, that's still a pretty big deal. And then, as that keeps going, and the AI can, like, autonomously do some science, figure out novel physics."

When addressing the risks associated with AI, Altman conveyed a less alarmed stance than some of his contemporaries in the field, such as Dario Amodei of Anthropic and Demis Hassabis of DeepMind, who have voiced concerns about future catastrophic scenarios.

"I don't know about way riskier. I think like, the ability to make a bioweapon or like, take down a country's whole grid — you can do quite damaging things without physical stuff," Altman acknowledged. He also touched upon more "silly" yet pertinent risks: "Like, I would be afraid to have a humanoid robot walking around my house that might fall on my baby, unless I, like, really, really trusted it."

The Uncharted Territory of Future Societal Impact

For the time being, Altman observes that life continues with relative normalcy. Yet, he anticipates that the impact of AI will eventually snowball, though he admits to uncertainty about the ultimate societal landscape.

"I think we will get to extremely smart and capable models — capable of discovering important new ideas, capable of automating huge amounts of work," he affirmed. "But then I feel totally confused about what society looks like if that happens." This leads him to a crucial call to action: "So I'm like most interested in the capabilities questions, but I feel like maybe at this point more people should be talking about, like, how do we make sure society gets the value out of this?"
