AI Godfather Shares How ChatGPT Ended His Relationship
A Breakup by Bot: A Personal Glimpse into AI's Reach
Geoffrey Hinton, a pivotal figure often hailed as the “Godfather of AI,” recently shared a story that brings the influence of artificial intelligence alarmingly close to home. In a surprising personal revelation, Hinton disclosed that a former girlfriend used an AI chatbot to end their relationship.
According to Hinton, she prompted the AI to detail why he had been a “rat,” and the chatbot produced a critique of his behavior, which she then passed on to him.
“She got the ChatGPT to explain how awful my behaviour was and gave it to me,” Hinton recounted to the Financial Times. “I didn’t think I had been a rat, so it didn’t make me feel too bad. I met somebody I liked more—you know how it goes.”
While the anecdote might seem humorous, it serves as a powerful illustration of how deeply AI is becoming integrated into our most personal communications and daily lives.
From Anecdote to Serious Warnings
This lighthearted episode stands in stark contrast to Hinton's increasingly grave warnings about the future of AI. Since his departure from Google in 2023, he has become one of the most prominent voices cautioning against the technology's potential dangers. His concerns range from massive job displacement to the existential risk of machines eventually surpassing human intelligence.
“When the assistant is much smarter than you, how are you going to retain that power?” he asked during the interview, highlighting the core dilemma of controlling superintelligence.
AI Could Let Anyone Create Bioweapons
Hinton, who was awarded the Nobel Prize in Physics last year, has voiced specific fears about the democratization of dangerous technologies. He warns that powerful AI could soon give ordinary individuals the capability to create devastating weapons, including biological or even nuclear devices.
“A normal person assisted by AI will soon be able to build bioweapons, and that is terrible,” he stated, painting a grim picture of future security threats.
The Mother and Baby Analogy for Control
When proposing safeguards against rogue superintelligence, Hinton offers a unique and compelling analogy: AI systems should be designed with the protective instincts of a caregiver.
“There is only one example we know of a much more intelligent being controlled by a much less intelligent being, and that is a mother and baby,” he explained. “If babies couldn’t control their mothers, they would die.”
In his view, embedding AI with inherent, deeply ingrained protective behaviors—similar to a mother's instinct to care for her child—could be a key strategy to ensure humanity's survival in a world with superintelligent machines.
Economic Impact: Jobs and Inequality
Beyond existential risks, Hinton has also warned of severe economic disruption. He predicts that AI will lead to widespread job losses as it makes human intelligence redundant across many sectors, a shift he compares to the Industrial Revolution. This could lead to an unprecedented concentration of wealth and power within a few corporations and the elite.
Hinton clarifies that the issue isn't the technology itself but the capitalist framework that incentivizes replacing human workers with more efficient machines. To mitigate this, he strongly advocates for systemic changes like a universal basic income (UBI) to ensure the economic benefits of AI are distributed more fairly and human dignity is upheld.
The Race Toward Superintelligence
When asked about the timeline for AI achieving intelligence superior to humans, Hinton offers a sobering estimate shared by many researchers: “between five and 20 years, that’s the best bet.” He warns that once this threshold is crossed, a superintelligent AI could easily outmaneuver humans, making questions of control and safety paramount.
As AI's role in our lives grows, tech companies are beginning to respond. OpenAI, for example, has issued guidelines cautioning against using chatbots for major life decisions. They are actively updating their systems to help users evaluate their choices thoughtfully rather than providing definitive, prescriptive answers, acknowledging the profound challenges of integrating AI into the most sensitive areas of human experience.