Doctors Urge Caution With AI Health Advice

2025-07-12 · CBC · 4 minute read
Artificial Intelligence
Healthcare
Medical Technology

While artificial intelligence tools like ChatGPT can sometimes provide accurate answers to patient questions, Canadian medical researchers are issuing a strong warning: always verify this information before taking any action.

The Rise of AI as a Medical Advisor

This caution was a key topic at a recent media briefing hosted by the Ontario Medical Association (OMA), which focused on the growing trend of patients turning to do-it-yourself information sources, from search engines to chatbots, for medical guidance.

Dr. Valerie Primeau, a psychiatrist from North Bay, emphasized the urgency of this issue. "I have patients now that talk to ChatGPT to get advice and have a conversation," she said. Primeau notes that while chatbots can produce convincing and even empathetic responses, the information can often be fake. "If we don't address it now and help people navigate this, they will struggle."

Real-World Risks: When AI Advice Goes Wrong

Dr. David D'Souza, a radiation oncologist in London, Ont., shared a concerning anecdote that highlights the potential dangers. "A patient came to me asking if he should wait to have his cancer that was diagnosed treated in a few years because he believes that AI will customize cancer treatments for patients," D'Souza recounted. "I had to convince him why he should have treatment now."

This example underscores how misinterpretations of AI-generated content can lead patients to consider delaying vital, conventional treatments.

Dr. Zainab Abdurrahman suggests that patients check chatbot responses against the websites of professional medical organizations.

OMA president Dr. Zainab Abdurrahman advises patients to be skeptical. If you see a post claiming "doctors have been hiding this from you," she suggests cross-referencing the information with official websites of specialist groups, like provincial cancer care associations. Abdurrahman also warned about fake ads and AI-generated images designed to mislead patients.

The Problem of Lost Nuance in AI Summaries

Even when not entirely fabricated, AI-generated answers can be dangerously incomplete. Today's chatbots are known to present false information with an authoritative tone. In one study from Western University, researchers analyzed thousands of AI-generated summaries of medical literature and discovered that three-quarters of them omitted crucial details.

Dr. Benjamin Chin-Yee, a hematologist and co-author of the study, explained that a journal article might specify a drug is effective only for a certain patient group, but the AI summary would leave that critical detail out. "The worry is that when that nuance in detail is lost, it can be misleading to practitioners who are trying to use that knowledge to impact their clinical practice," he said.

Generative AI chatbots are built on pattern-matching systems that produce the most statistically likely output for a given prompt.

Surprising Empathy but Alarming Inaccuracy

In a separate study, David Chen, a medical student at the University of Toronto, compared chatbot responses to those of oncologists for 200 cancer-related questions from a Reddit forum. "We were surprised to find that these chatbots were able to perform to near-human expert levels of competency based on our physician team's assessment of quality, empathy and readability," Chen noted.

However, he stressed that these experimental results may not reflect real-world reliability. "Without medical oversight, it's hard to 100 per cent trust some of these outputs," Chen said, highlighting unresolved concerns about privacy and patient trust.

A significant issue is that chatbots can hallucinate, producing outputs that sound correct but are actually fabricated. Chen pointed out that studies show hallucination rates can be as high as 20 per cent, making the information "clinically erroneous."

Expert Advice: How to Use AI Health Tools Safely

Cardiologist Dr. Eric Topol advises consulting multiple chatbots and verifying sources.

Dr. Eric Topol, a cardiologist and author, agrees that the impact of AI on health and longevity is still an emerging field. "It hasn't been systematically assessed in a meaningful way for public use," he said.

For those who use these tools, Topol advises consulting multiple chatbots to compare responses and asking for citations from medical literature, while also remembering to verify that the citations are real. The ultimate message from all experts is clear: AI can be a starting point for research, but it is no substitute for the nuanced, personalized, and reliable advice of a medical professional. "It's a different world now and you can't go back in time," Topol concluded, stressing the importance of using these powerful new tools wisely.

Read Original Post