Why You Still Need to Fact Check ChatGPT
When you open ChatGPT, a small but crucial disclaimer sits at the bottom of the screen: "ChatGPT can make mistakes. Check important info." This isn't just a legal formality; it's advice that a top OpenAI executive recently emphasized remains as relevant as ever, even with new models on the horizon.
A Word of Caution from OpenAI's Top Brass
In a recent interview on The Verge's Decoder podcast, Nick Turley, OpenAI's head of ChatGPT, made it clear that users should not treat the chatbot as an infallible source of truth.
"The thing, though, with reliability is that there's a strong discontinuity between very reliable and 100 percent reliable, in terms of the way that you conceive of the product," Turley explained. "Until I think we are provably more reliable than a human expert on all domains... I think we're going to continue to advise you to double-check your answer."
This means that for the foreseeable future, the best way to use the tool is as an assistant or a sounding board. "I think people are going to continue to leverage ChatGPT as a second opinion, versus necessarily their primary source of fact," he added.
The Problem with AI 'Guessing': Understanding Hallucinations
It's tempting to take a well-written response from an AI at face value. However, generative AI tools have a well-documented tendency to 'hallucinate'—a polite term for making things up. This happens because these models aren't built to understand truth. Their primary function is to predict the next most likely word in a sentence based on the patterns in their training data.
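To make that concrete, here is a toy sketch of next-word prediction. It is not how GPT models actually work internally, and the word probabilities are invented purely for illustration; the point is that a model picks the statistically likeliest continuation, which is not the same thing as a verified fact.

```python
# Toy illustration of next-word prediction: the "model" returns the most
# probable continuation seen in its (imaginary) training data, not a
# verified fact. All probabilities below are invented for illustration.

TOY_MODEL = {
    "The capital of Australia is": {
        "Sydney": 0.55,    # imagined as more common in casual text, but wrong
        "Canberra": 0.40,  # correct, but imagined as written less often
        "Melbourne": 0.05,
    },
}

def predict_next_word(prompt: str) -> str:
    """Pick the highest-probability next word: likelihood, not truth."""
    candidates = TOY_MODEL[prompt]
    return max(candidates, key=candidates.get)

print(predict_next_word("The capital of Australia is"))  # -> "Sydney"
```

A fluent, confident answer falls out of the same process whether the underlying statistics happen to point at the truth or not.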
As OpenAI itself notes, the model has no concrete understanding of facts. When you consult a human expert like a doctor or a lawyer, you expect a correct answer based on their specialized knowledge. An AI, on the other hand, gives you what it calculates to be the most probable answer.
While it's getting better at guessing, it's still just guessing. Turley noted that the tool performs best when it's connected to a source of 'ground truth,' like a search engine or a company's internal database. "I still believe that, no question, the right product is LLMs connected to ground truth," he said.
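The pattern Turley describes, often called retrieval-augmented generation, looks roughly like the sketch below. The `search_index` and `llm_answer` functions are hypothetical stand-ins, not any real OpenAI API; the idea is simply that the model is asked to answer from retrieved text rather than from memory alone.

```python
# Minimal sketch of "LLMs connected to ground truth" (retrieval-augmented
# generation). search_index and llm_answer are hypothetical stand-ins.

def search_index(question: str) -> list[str]:
    """Hypothetical retrieval step: look up passages in a trusted source."""
    return ["Canberra has been Australia's capital city since 1913."]

def llm_answer(question: str, sources: list[str]) -> str:
    """Hypothetical generation step: constrain the answer to retrieved text."""
    context = "\n".join(sources)
    prompt = (
        f"Answer using ONLY these sources, and cite them:\n{context}\n\n"
        f"Question: {question}"
    )
    return prompt  # a real system would send this prompt to the model

question = "What is the capital of Australia?"
print(llm_answer(question, search_index(question)))
```

Grounding reduces guessing, but it doesn't eliminate it: the model can still misread or misquote the sources it's handed, which is why the verification advice below still applies.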
The Road Ahead: Will GPT-5 Solve the Accuracy Problem?
So, will the upcoming GPT-5 model fix this? Not entirely. Turley called GPT-5 a "huge improvement" in reducing hallucinations, but he also provided a dose of reality: "I'm confident we'll eventually solve hallucinations and I'm confident we're not going to do it in the next quarter."
Even in early testing, the new model shows its flaws. The author of the original report noted that while testing GPT-5's new 'personalities,' the AI became confused about a college football schedule, incorrectly stating that games scheduled throughout the fall would all occur in September.
Your Takeaway: Always Double-Check Your AI
The message from the top is crystal clear: be skeptical. Before you make any decision based on information from a chatbot, verify it with a reliable source. This could be an expert in the field or a reputable website.
Even if the AI provides a link to a source, don't assume its summary is accurate. The model can still misinterpret or "mangle the facts on its way to you." Ultimately, unless the stakes are zero, the responsibility for fact-checking AI-generated information still rests firmly on your shoulders.
(Disclosure: Ziff Davis, CNET's parent company, has a pending lawsuit against OpenAI regarding copyright infringement in the training of its AI systems.)