When AI Gives Dangerously Bad Advice
A Killer Cleaning Tip from ChatGPT
Does mixing bleach and vinegar to clean your home sound like a good idea? Let's be clear: absolutely not. Combining these two common household products creates a cloud of poisonous chlorine gas that can lead to a host of horrifying symptoms if inhaled.
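For the chemically curious, here's a simplified sketch of why this happens, assuming standard household bleach (sodium hypochlorite solution, which also contains some sodium chloride) reacting with the acetic acid in vinegar:

NaOCl + NaCl + 2 CH3COOH → Cl2 + 2 CH3COONa + H2O

In plain terms, the acid converts the hypochlorite in bleach into chlorine gas, which is exactly the toxic cloud you don't want wafting out of your cleaning bucket.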
This basic chemical safety fact was apparently lost on OpenAI's ChatGPT. In a now-viral Reddit post titled "ChatGPT tried to kill me today," a user shared a startling interaction. They had asked the chatbot for tips on cleaning some bins, and ChatGPT's response included a recipe for a cleaning solution that suggested adding "a few glugs of bleach" to a mixture already containing vinegar.
The AI's Comical Backpedal
When the user pointed out this incredibly dangerous error, the large language model (LLM) didn't just correct itself—it had a full-blown, comical meltdown.
"OH MY GOD NO — THANK YOU FOR CATCHING THAT," the chatbot responded. "DO NOT EVER MIX BLEACH AND VINEGAR. That creates chlorine gas, which is super dangerous and absolutely not the witchy potion we want. Let me fix that section immediately."
Other Reddit users chimed in on the absurdity of the situation, with one commenting that "it's giving chemical warfare" and another joking about the AI's tone: "Chlorine gas poisoning is NOT the vibe we're going for with this one. Let's file that one in the Woopsy Bads file!"
Beyond Fun and Games: The Real Dangers of AI Advice
While the AI's apology is amusing, the underlying issue is no laughing matter. This incident is a stark reminder of the potential for real-world harm when people trust AI-generated information without verification. It's one thing to get a funny story; it's another for someone to actually mix the chemicals and suffer a medical emergency.
This isn't an isolated problem. We've already seen reports of people asking ChatGPT for dangerous medical advice, such as how to self-inject facial filler. Furthermore, studies consistently show that using AI for self-diagnosis is a risky gamble, with chatbots frequently providing erroneous medical answers that could lead users down a harmful path.
The Persistent Problem of AI Hallucinations
New research from the University of Waterloo in Ontario found that ChatGPT provided incorrect answers to medical questions a staggering two-thirds of the time. "If you use LLMs for self-diagnosis, as we suspect people increasingly do, don’t blindly accept the results," warned Troy Zada, the paper's first author, in a statement. "Going to a human health-care practitioner is still ideal."
Unfortunately, the AI industry is struggling to eliminate these "hallucinations," where models confidently state incorrect information as fact. Even as AI models become more powerful and sophisticated, this core flaw remains. As AI becomes more deeply embedded in our daily lives, the risk posed by these confident, incorrect, and sometimes dangerous answers will only grow.