AI Health Advice: A Risky Diagnosis
Patients are increasingly being advised to approach AI tools such as ChatGPT with caution. The primary concern is that the technology's well-written and confident answers can be so persuasive that people mistake them for reliable medical advice.
During the World Health Expo in Dubai, health experts highlighted that large language models (LLMs) are already transforming patient-doctor interactions. It's becoming more common for patients to arrive at appointments with treatment plans or medication lists suggested by AI.
The Persuasive Power of AI
Experts warn that the growing persuasive capability of AI could cause patients to put their faith in machine-generated responses that lack clinical validation. Although ChatGPT has successfully passed parts of the U.S. medical licensing exam, professionals emphasize that answering test questions is vastly different from the real-world practice of medicine, which demands nuanced context, judgment, and accountability.
Rashmi Rao, managing director at Rcubed Ventures, pointed to an MIT study that examined how patients reacted to answers from LLMs versus those from human doctors. "The study basically compared AI-generated results versus doctor-generated results, and the patients, 50 per cent of the time, could not tell whether the response was AI-generated or doctor-generated," she explained.
The most alarming finding, Rao noted, was how highly participants rated the AI's responses, even when they were incorrect. "That’s when it goes down that slippery slope, especially if it’s a patient that is trusting the AI to give its results … but the AI is giving them low-accuracy results. They still trust that more," she said. This highlights the need to hold patient-facing AI tools accountable not just for data privacy, but for their immense persuasive influence.
Why Patients Turn to AI for Help
Tjasa Zajc, a digital health expert and podcast host, shared her personal experience of using ChatGPT for guidance between doctor appointments for her inflammatory bowel disease. "Patients don’t just use AI because it’s a fancy new technology. They use AI out of desperation, because they’re in between visits and when you leave the doctor’s office, your next check-up could be in half a year. There’s no easy access to the clinician," Zajc stated.
She also pointed out a critical nuance in using these tools: LLMs tend to agree with the user's premise, so the way a question is phrased can significantly alter the response. For example, asking "how harmful is this drug to me?" presupposes that the drug is harmful, which can bias the AI's answer. More broadly, AI marks a shift in the doctor-patient dynamic, with clinicians now often expecting patients to arrive better informed. While this can encourage patients to take ownership of their health, Zajc warns, "we need to be cautious as consumers."
A Double-Edged Sword for Patient Empowerment
Despite the risks, AI tools could offer benefits, particularly for patients in low-income countries with limited healthcare access, according to Jon Christensen of KLAS Research. He referenced research from a U.S. safety-net hospital that challenged the idea that digital tools widen inequalities. "Patients in the lowest income demographic, that did not speak English as their first language, they actually found the technology to be the most empowering," he said.
These patients could use the technology to read information at their own pace, write questions, and translate materials, leading to a better understanding of their treatment. Generative AI can build on this by providing instant translations and resources in native languages.
However, patient sentiment remains divided. Surveys show that while about half are comfortable with their health system using AI, 20 percent are opposed. "The discomfort comes with things like security and privacy," Christensen said. "It comes with the closer it gets to diagnosing them. They’re very uncomfortable with that, if the AI is the primary diagnostic tool." Patients want clear assurance that a human doctor remains central to their care, overseeing any diagnosis and treatment plan.