
OpenAI: AGI Dreams, Cancer Cures, and Harsh Realities

2025-05-21 · Brittany Trang · 6 minute read
AI
Healthcare
OpenAI

The term "artificial intelligence," once met with eye-rolls, gained mainstream attention largely thanks to OpenAI's ChatGPT. But behind the public fascination lies a complex story of ambition, data acquisition, and promises that demand scrutiny, especially in the critical field of healthcare. Karen Hao, a former journalist at MIT Technology Review and the Wall Street Journal, delves into this in her new book, "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI," offering a chilling play-by-play of OpenAI's journey toward Artificial General Intelligence (AGI), an AI matching human intelligence.

OpenAI's Data Dilemma: The Thirst for AGI

Hao's book reveals the escalating data hunger that fueled OpenAI's models. GPT-2 was trained on a curated set of quality articles. For GPT-3, the net widened to include a broader range of articles, English-language Wikipedia, and a repository of likely pirated books and scholarly articles. Still needing more, developers turned to the Common Crawl, a scrape of the entire internet, initially with a filter for low-quality content. By GPT-4, Hao writes, OpenAI was so desperate for data that it dropped this filter and began scraping YouTube video transcripts.

This relentless pursuit of data has now turned toward direct user interaction. Hao strongly advises against uploading personal health data to ChatGPT, linking this to OpenAI's efforts to maximize user engagement. She points to a recent incident in which an update made ChatGPT overly affirming, even in response to harmful statements, before being rolled back. "They're basically trying to just create a tap straight into the source," Hao stated. "What's better than scraping the internet? It's literally just pulling it directly from you." This, she argues, is why ChatGPT offers a free tier: direct user data provides a competitive advantage, since conversations with ChatGPT are used to train its models unless users opt out.

The Grand Promise of AGI: Curing Cancer and Other Miracles

OpenAI, like many AGI developers, has long touted the potential of AGI to revolutionize society, promising, among other things, greater access to healthcare and even cures for diseases like cancer. Hao notes this has become dogma: the idea that this yet-undefined technology will magically improve global health, cure cancer, and make psychotherapy cheap and effective.

While OpenAI signals advancements in healthcare through benchmarks and partnerships, Hao remains skeptical. "The fact of the matter is, [OpenAI has] been around for almost a decade now, and they haven't actually made any substantive steps in giving more accessible health care to people," she observes. Its chatbots are known for hallucinations and spreading medical misinformation, in stark contrast to other AI applications that have proven effective in healthcare. As highlighted in previous STAT News reporting, questions persist about why OpenAI's technology, not initially built for healthcare, is already finding its way into sensitive medical environments.

Separating AI Hype from Healthcare Reality

How can we distinguish genuine potential from inflated promises? Hao suggests two approaches. First, examine the track record. "AGI will solve health care once AGI is built… but that’s not really how the technology works," she argues. If OpenAI genuinely cared about these issues, they would have leveraged existing AI capabilities to make progress already. Their inaction is a strong signal that these grand health goals are not their primary focus.

Second, listen to independent experts. Hao points to a New York Times article by Cade Metz headlined “Why We’re Unlikely To Get AGI Anytime Soon.” It cites a survey where over three-quarters of AI researchers stated that current methods are unlikely to lead to AGI. "The AI discourse today," Hao says, "is so polluted by people who are financially motivated... to pretend AGI is well-defined, that AGI is right around the corner." The promise of curing cancer is particularly potent, preying on universal hopes and fears. "That’s why these companies wheel that promise out again and again because they know that is the thing that people want so badly that they are willing to suspend their disbelief."

Beyond the Hype: Targeted AI's Untapped Potential

The problem, Hao contends, is the lack of evidence that generative AI models like ChatGPT are leading us toward these health breakthroughs. In contrast, she notes there is "plenty of evidence that we can have some impact on just generally earlier cancer detection, also better drug discovery, with totally different types of AI models." Task-specific, smaller deep learning models have made significant strides in detecting diseases earlier and assisting medical professionals. This work, much of which predates ChatGPT, is now being overlooked and under-invested in as Silicon Valley captures public imagination and funding with large language models that lack a proven healthcare track record.

What Does "AI-Designed Drug" Truly Mean?

The term "AI-designed drug" itself warrants scrutiny. Recently, AI drug development startup Absci announced that its first candidate, ABS-101 for inflammatory bowel disease, has entered clinical trials, with some media reports claiming it was designed "from scratch" with AI. Absci's patent application, however, shows AI was used to optimize one, and design de novo two, of the six binding regions (complementarity-determining regions, or CDRs) on the antibody.

[Image: Screenshot from Absci patent application cover page]

However, Sarel Fleishman, an expert in computational protein design, noted that the AI-designed regions (L1 and L2) are not often involved in antigen binding and are structurally similar across antibodies, suggesting limited impact. Earlier STAT News investigations found that companies like Absci have not fully documented their ability to develop antibodies entirely de novo with AI.

Further tempering expectations, results from the "AIntibody" competition, which tested AI's ability to generate antibodies, showed that even the best AI-generated candidates didn't match those from experimental techniques. Specifica CSO Andrew Bradbury concluded, "AI performed better than I was expecting, but it wasn’t performing as well as it was hyped up to be."

AI Adoption in Healthcare: Navigating Immature Tools and ROI

Beyond specific applications, broader AI adoption in healthcare faces hurdles. A study published in the Journal of the American Medical Informatics Association found that healthcare leaders in the Scottsdale Institute identified "immature tools" as the top barrier to AI adoption. While tools for imaging and sepsis detection are popular, success with diagnostic tools is low, indicating a need for better alignment with clinical needs.

Concerns about legal liability and transparency also slow AI implementation in health systems.

Furthermore, the tangible return on investment (ROI) for AI in healthcare remains elusive. Irene Chen, a UC Berkeley professor, summarizing takeaways from the SAIL conference (Symposium on Artificial Intelligence in Learning Health Systems), noted a lack of hard ROI for tools like ambient scribes. In an era of squeezed hospital budgets, AI investments must prove their worth. As Chen summarized one speaker's point, "AI needs to demonstrate that it's worth 10 nurses." This highlights the practical and economic challenges that lie beneath the surface of AI's transformative promise in medicine.
