AI Revolutionizes Patient Screening For Clinical Trials
A groundbreaking new study published in Machine Learning: Health demonstrates how AI, specifically ChatGPT, can dramatically speed up the patient screening process for clinical trials. This development holds significant promise for reducing costly delays and boosting the success rates of vital medical research.
The High Cost of Slow Clinical Trials
Clinical trials are the backbone of medical innovation, serving as the final testing ground for new drugs and treatments before they become available to the public. However, many of these trials face a critical bottleneck: enrolling a sufficient number of participants. A recent analysis found that as many as 20% of trials affiliated with the National Cancer Institute (NCI) fail simply due to low enrollment. This not only wastes money and time but can also leave trials underpowered, weakening the reliability of their results and delaying the arrival of new therapies.
The current screening process is a significant contributor to this problem. It is a slow, manual task where researchers meticulously comb through individual patient medical records, a process that can take about 40 minutes per patient. With limited staff and resources, this pace is often too slow to meet the demand.
Adding to the challenge, crucial patient information is often buried in unstructured text within electronic health records (EHRs), such as doctors' narrative notes. Traditional software struggles to parse this data, meaning many eligible patients are overlooked, ultimately slowing down the entire pipeline for new medical treatments.
Can AI Provide a Solution?
To address this challenge, researchers at UT Southwestern Medical Center explored using ChatGPT to automate and accelerate the screening process. In their study, they tasked GPT-3.5 and GPT-4 with analyzing data from 74 patients to determine their eligibility for a head and neck cancer trial.
Putting ChatGPT to the Test
The team experimented with three different methods for prompting the AI to ensure a comprehensive evaluation:
- Structured Output (SO): The AI was asked to provide answers in a specific, predetermined format.
- Chain of Thought (CoT): The model was prompted to outline its step-by-step reasoning process.
- Self-Discover (SD): The AI was given the freedom to independently determine the most important information to look for.
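The three strategies above can be sketched as prompt templates. The sketch below is illustrative only: the prompt wording and the eligibility criteria are assumptions for demonstration, not the study's actual prompts or trial criteria.

```python
# Hypothetical sketch of the three prompting strategies (SO, CoT, SD).
# Criteria and wording are illustrative assumptions, not the study's prompts.

TRIAL_CRITERIA = [
    "Histologically confirmed head and neck cancer",
    "Age 18 or older",
    "No prior radiation therapy to the head or neck",
]

def build_prompt(patient_note: str, strategy: str) -> str:
    """Assemble a screening prompt for an LLM under one of three strategies."""
    criteria = "\n".join(f"- {c}" for c in TRIAL_CRITERIA)
    base = (
        "You are screening a patient for a clinical trial.\n"
        f"Eligibility criteria:\n{criteria}\n\n"
        f"Patient note:\n{patient_note}\n\n"
    )
    if strategy == "SO":   # Structured Output: force a fixed answer format
        return base + (
            "For each criterion, answer in exactly this format:\n"
            "criterion | met / not met / unknown | supporting evidence"
        )
    if strategy == "CoT":  # Chain of Thought: ask for step-by-step reasoning
        return base + (
            "Reason step by step about each criterion, "
            "then state a final eligibility decision."
        )
    if strategy == "SD":   # Self-Discover: let the model choose what to check
        return base + (
            "First decide which pieces of information are most important "
            "to check, then use them to judge eligibility."
        )
    raise ValueError(f"unknown strategy: {strategy}")

# Example: build a Chain-of-Thought prompt for one (fictional) patient note
prompt = build_prompt(
    "65-year-old with biopsy-proven laryngeal carcinoma, no prior RT.", "CoT"
)
```

Each template would then be sent to the model as a single message; only the final instruction differs between strategies, which keeps comparisons between them fair.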
Promising Results: Speed, Cost, and Accuracy
The results were overwhelmingly positive. Both AI models significantly outperformed the manual process: screening times dropped from roughly 40 minutes to between 1.4 and 12.4 minutes per patient, at a minimal cost of $0.02 to $0.27 per screening.
While both versions were effective, GPT-4 proved to be more accurate than its predecessor, GPT-3.5, though it was slightly slower and more expensive to run.
"LLMs like GPT-4 can help screen patients for clinical trials, especially when using flexible criteria. They're not perfect, especially when all rules must be met, but they can save time and support human reviewers," said Dr. Mike Dohopolski, lead author of the study.
The Future of AI in Medical Research
This research is a powerful illustration of how AI can streamline clinical trials, potentially bringing new and effective treatments to patients much sooner. The original study is one of the first to be published in IOP Publishing's Machine Learning™ series, a new family of journals dedicated to AI and ML applications in science.
The same research team is also pioneering other uses for AI in medicine. They have developed a deep learning system called GeoDL that provides surgeons with 3D radiation dose estimates from CT scans in just 35 milliseconds, enabling real-time adjustments to a patient's radiation therapy.