AI in News: Navigating Pitfalls Post-ChatGPT
The AI Blunder That Sounded a Familiar Alarm
An inaccurate AI-produced reading list recently published by two newspapers starkly demonstrates how easily publishers can still inadvertently circulate low-quality, AI-generated content, often dubbed "AI slop."
The Chicago Sun-Times and the Philadelphia Inquirer recently featured a summer reading insert from King Features, a Hearst Newspapers subsidiary supplying licensed content. This insert, while listing real authors, recommended mostly non-existent books. An investigation by 404 Media revealed that a human writer used ChatGPT to create the list but neglected to verify its accuracy.
"I do use AI for background at times but always check out the material first," the writer of the insert admitted to 404 Media. "This time, I did not and I can’t believe I missed it because it’s so obvious. No excuses."
The Lingering Shadow of AI Inaccuracies
The launch of OpenAI’s ChatGPT over two years ago ignited an AI gold rush, flooding the market with tools designed to simplify online information retrieval. That convenience comes with a drawback, however: AI chatbots still routinely produce incorrect or fabricated answers.
Navigating AI Adoption Challenges in Newsrooms
While many prominent news organizations have established AI guidelines since ChatGPT's emergence, the scale of their operations and numerous external partnerships make it challenging to pinpoint potential sources of embarrassing AI mistakes.
The reading list incident is a case in point, illustrating the various ways AI-driven errors can creep into news products. Tracy Brown, chief partnerships officer for Chicago Public Media (parent company of the Sun-Times), mentioned to CNN that most supplements in the Sun-Times this year, including puzzles and guides, came from Hearst. Brown emphasized that regardless of the content format, newsrooms must approach AI use with caution.
"It’s not that we’re saying that you can’t use any AI," Brown stated. "You have to use it responsibly and you have to do it in a way that keeps your editorial standards and integrity intact."
Verify Everything: The Unwavering Need for Human Oversight
Given AI's susceptibility to errors, it's crucial for newsrooms to uphold the "fundamental standards and values that have long guided their work," Peter Adams, senior vice president of research and design at the News Literacy Project, explained to CNN. This commitment includes transparency regarding the use of AI.
Numerous leading publishers have openly discussed their use of AI to support reporting. The Associated Press, widely regarded as a benchmark for journalistic practices, utilizes AI for tasks like translation, summaries, and headlines but consistently incorporates a human review process to prevent errors. Amanda Barrett, AP’s vice president of standards, informed CNN that any data sourced via AI tools is treated as unvetted material, with reporters bearing the responsibility for its verification.
Furthermore, the AP ensures its third-party partners adhere to comparable AI policies.
"It’s really about making sure that your standards are compatible with the partner you’re working with and that everyone’s clear on what the standard is," Barrett commented.
Zack Kass, an AI consultant and former go-to-market lead at OpenAI, concurred with Barrett. He advised CNN that newsrooms should view AI as "like a junior researcher with unlimited energy and zero credibility." Consequently, AI-generated text must undergo "the same scrutiny as a hot tip from an unvetted source."
"The mistake is using it like it’s a search engine instead of what it really is: an improviser with a genius-level memory and no instinct for truth," Kass added.
AI Errors: Few but Potentially Damaging
Despite these concerns, Felix Simon, a research fellow in AI and digital news at the University of Oxford’s Reuters Institute for the Study of Journalism, noted that "the really egregious cases have been few and far between."
Chris Callison-Burch, a professor of computer and information science at the University of Pennsylvania, told CNN that recent research advances have reduced AI "hallucinations" (false answers) by enabling chatbots to spend more processing time on a query before generating a response. Nevertheless, AI systems are not foolproof, which is why such incidents continue to happen.
"AI companies need to do a better job communicating to users about the potential for errors, since we have repeatedly seen examples of users misunderstanding how to use technology," Callison-Burch remarked.
Tracy Brown affirmed that all editorial content at the Sun-Times is human-produced. Moving forward, the newspaper intends to ensure that its editorial partners, such as King Features, maintain these standards, mirroring how the newspaper already aligns freelancers' codes of ethics with its own.
Beyond Cleanup: The Irreplaceable Human Element in News
However, the "real takeaway," as Zack Kass highlighted, isn't merely that humans are necessary, but rather "why we’re needed."
"Not to clean up after AI, but to do the things AI fundamentally can’t," he explained. This includes the ability to "make moral calls, challenge power, understand nuance and decide what actually matters."