ChatGPT Diet Tip Leads To Dangerous Poisoning
In a startling case that highlights the potential dangers of artificial intelligence, a man in the United States was hospitalized with life-threatening poisoning after following dietary advice from ChatGPT.
An Unprecedented Case of AI-Induced Poisoning
Doctors from the University of Washington reported what they believe to be the first known case of bromide poisoning linked directly to AI-generated advice. The details were published in Annals of Internal Medicine: Clinical Cases. The incident serves as a critical warning about the risks of using large language models for medical guidance without professional oversight.
From Salt Substitute to Psychotic Episode
The patient had asked ChatGPT for a safe alternative to table salt (sodium chloride). The AI reportedly suggested sodium bromide, failing to mention its severe toxicity to humans. Trusting the advice, the man consumed the compound for three months.
He first arrived at an emergency room with severe paranoia, believing his neighbor was trying to poison him. Although his vital signs were largely normal, he exhibited concerning symptoms, including hallucinations and a refusal to drink water even though he was thirsty. His condition deteriorated rapidly into a full psychotic episode, compelling doctors to place him on an involuntary psychiatric hold for his own safety.
Diagnosing Bromism: A Forgotten Condition
After he was stabilized with intravenous fluids and antipsychotic medication, the man was able to explain the situation to his doctors. He revealed his reliance on ChatGPT for the dietary suggestion. This led to the diagnosis of bromism, or bromide poisoning.
Bromide compounds were once prescribed for conditions such as anxiety and insomnia. However, they were withdrawn from human medicine decades ago because prolonged use causes serious neurological and psychiatric problems. Today, bromism is an extremely rare diagnosis in humans, with bromide found mainly in certain veterinary and industrial products.
The AI's Dangerous Lack of Context
While the doctors did not have access to the man's original chat history, they tested the premise themselves. When they asked ChatGPT the same question about salt alternatives, it again suggested bromide without any warning that it is unsuitable and dangerous for human consumption. This exposes a significant flaw: the AI can present information that is technically accurate (sodium bromide is, chemically, a salt) while omitting the crucial health and safety context.
A Stark Warning on AI and Medical Advice
Fortunately, the man made a full recovery after a three-week hospital stay. However, his case is a powerful cautionary tale. Experts stress that while AI tools can make vast amounts of information accessible, they should never be a substitute for advice from a qualified medical professional. As this incident proves, AI can provide dangerously incorrect guidance with potentially fatal consequences.