AI Chatbots and Colon Cancer: What Patients Should Know
The artificial intelligence tool ChatGPT is emerging as a helpful resource for answering general questions about colon cancer, such as symptoms, screening, and prevention. However, a new study reveals it may not be reliable for the most current information on diagnosis and treatment. These findings highlight both the potential of AI chatbots to improve patient education in oncology and the ongoing need for in-depth guidance from trusted healthcare professionals.
A study presented at the 2025 ASCO Gastrointestinal Cancers Symposium showed that while ChatGPT's responses often aligned with expert opinion, there were significant gaps. Specifically, experts were less likely to agree with the AI's answers concerning diagnosis and treatment.
“ChatGPT provides highly accurate and relevant responses for patient education regarding colon cancer. It's a promising tool, particularly in explaining general information; however, it has limitations in nuanced areas, like diagnosis and treatment,” stated lead author Dr. Sujata Ojha, an internal medicine resident at Dell Medical School at the University of Texas at Austin.
Dr. Ojha also noted, “Younger people who are diagnosed with colon cancer or who are concerned about their colon cancer risk are more likely to use new AI technology to understand the disease further.”
Putting ChatGPT to the Test
As patients increasingly turn to digital sources for health information, researchers sought to determine if AI tools like ChatGPT could provide reliable answers that align with expert medical opinions. The research team compiled 10 comprehensive questions about colon cancer by reviewing frequently asked questions from reputable organizations, including the National Cancer Institute, Mayo Clinic, and the American Cancer Society. These questions were divided into two categories: General Oncology Characteristics (symptoms, screening, prevention) and Diagnosis and Treatment.
The questions were then posed to ChatGPT, simulating a typical patient inquiry. The AI-generated responses were evaluated by a panel of oncology experts using a 5-point Likert scale, where 1 indicated “strongly disagree” and 5 meant “strongly agree.”
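To make the scoring approach concrete, here is a minimal sketch of how per-category averages could be computed from such ratings. The numbers below are invented for illustration only; they are not the study's actual data.

```python
# Hypothetical illustration: the ratings below are invented, not the study's data.
# Each expert rating uses a 5-point Likert scale (1 = strongly disagree,
# 5 = strongly agree); ratings are averaged within each question category.
from statistics import mean

ratings = {
    "general_oncology": [5, 5, 5, 4, 5],        # example ratings, general questions
    "diagnosis_treatment": [5, 4, 4, 5, 4],     # example ratings, diagnosis/treatment
}

# Mean score per category, as reported in studies of this kind.
category_means = {cat: mean(vals) for cat, vals in ratings.items()}

# Overall mean across every individual rating.
overall_mean = mean(v for vals in ratings.values() for v in vals)

print(category_means, round(overall_mean, 2))
```

A simple arithmetic mean like this is the most common way Likert-scale ratings are summarized, though some analyses instead report medians because Likert data is ordinal.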
The Verdict: Where AI Shines and Falls Short
The overall average score given by the experts was high, at 4.72 out of 5. However, a closer look revealed a key difference between the categories. Answers related to “general oncology characteristics” received a near-perfect score of 4.92. In contrast, answers on “diagnosis and treatment” scored significantly lower at 4.52.
Dr. Ojha explained that the lower scores for diagnosis and treatment are likely because “new advances are made daily, making it difficult for AI to stay current.” Furthermore, colon cancer treatment is highly personalized and depends on patient-specific factors, such as genetic mutations, that AI cannot account for.
The study suggests that while AI can supplement patient education, further research is needed to understand its limitations and improve its clinical utility.
The Future of AI in Patient Education
Researchers plan to expand their study to assess ChatGPT's effectiveness in other areas like gastroenterology and colorectal surgery. They also aim to identify the specific reasons for expert disagreement on treatment and diagnosis to help improve the AI's future responses.
Dr. Laura Vater, an ASCO expert from the Indiana University Simon Cancer Center, commented on the findings: “This research demonstrates the ability of AI to enhance patient engagement, support informed decision-making, and potentially address disparities in healthcare accessibility. However, it also highlights the limitations of AI, especially in nuanced topics like diagnosis and treatment.”