
Google AI Spreads Misinformation And Dangerous Advice

2025-06-08 · Ariel Zilber · 4 minute read
Artificial Intelligence
Google AI
AI Hallucinations

Google AI Overviews Under Scrutiny for Misinformation

Google’s AI Overviews, a feature designed to provide quick answers to search queries, is reportedly generating "hallucinations," or bogus information. The feature is also drawing criticism for potentially undercutting publishers by diverting users away from traditional website links.

The tech giant, which previously faced scrutiny when its AI image generator produced factually or historically inaccurate images such as female Popes and black Vikings, is now under fire for its AI Overviews. These summaries have been criticized for offering false and sometimes dangerous advice, as reported by The Times of London.

Google’s latest artificial intelligence tool, designed to give quick answers to search queries, is facing criticism. Google CEO Sundar Pichai is pictured. (Source: AFP via Getty Images)

For instance, AI Overviews reportedly suggested adding glue to pizza sauce to help cheese stick better. In another example, the AI presented a fabricated phrase, "You can’t lick a badger twice," as a real idiom.

These errors, known to computer scientists as hallucinations, are problematic in themselves. The tool also compounds the issue by reducing the visibility of credible sources: instead of sending users directly to websites, AI Overviews summarizes information from search results and presents its own AI-generated answer accompanied by a few links.

Impact on Publishers and Web Traffic

Laurence O’Toole, founder of the analytics firm Authoritas, studied the impact of AI Overviews. His findings indicate that click-through rates to publisher websites drop by 40% to 60% when AI Overviews are displayed.

Google's Defense and Official Stance

In response to incidents like the pizza glue suggestion, Liz Reid, Google’s head of Search, told The Times, "While these were generally for queries that people don’t commonly do, it highlighted some specific areas that we needed to improve."

Google AI Mode is an experimental feature that uses artificial intelligence and large language models to process Google search queries. (Source: Gado via Getty Images)

A Google spokesperson commented to The Post, stating, "This story draws wildly inaccurate and misleading conclusions about AI Overviews based on an example from over a year ago." The spokesperson added, "We have direct evidence that AI Overviews are making the Search experience better, and that people prefer Search with AI Overviews. We have very high quality standards for all Search features, and the vast majority of AI Overviews are accurate and helpful."

AI Overviews, launched last summer, is powered by Google’s Gemini language model, which is comparable to OpenAI’s ChatGPT. Despite public apprehension, Google CEO Sundar Pichai defended the tool in an interview with The Verge, asserting that it aids users in discovering a wider array of information sources. Pichai stated, "Over the last year, it’s clear to us that the breadth of area we are sending people to is increasing … we are definitely sending traffic to a wider range of sources and publishers."

The Numbers Game: AI Hallucination Rates

Google appears to be downplaying its own AI's hallucination rate. When a journalist queried Google about its AI's error frequency, the AI Overviews response cited hallucination rates between 0.7% and 1.3%.

Google's AI Overviews, introduced last summer, is powered by the Gemini language model, a system similar to ChatGPT. (Source: AP)

However, hallucination-rate data published on the AI platform Hugging Face suggests the actual rate for the latest Gemini model is 1.8%.

Google’s AI models also seem to provide pre-programmed defenses for their actions. For example, when asked if AI "steals" artwork, the tool responded that it "doesn’t steal art in the traditional sense." Regarding whether people should fear AI, the tool discussed common concerns before concluding that "fear might be overblown."

Broader Concerns in the AI Industry

Some experts express concern that as generative AI systems grow more complex, they also become more susceptible to errors, with creators sometimes unable to fully explain the reasons. These concerns about AI hallucinations are not limited to Google.

OpenAI recently acknowledged that its latest models, o3 and o4-mini, exhibit hallucinations more frequently than previous versions. Internal testing revealed that o3 fabricated information in 33% of instances, and o4-mini did so 48% of the time, especially when addressing questions about real individuals.
