
Hacked Websites Are Gaming ChatGPT's Recommendation System

2025-07-10 · Matt G. Southern · 4 minute read
AI
ChatGPT
Cybersecurity

An investigation by SEO professional James Brockbank has uncovered a troubling vulnerability in how ChatGPT generates business recommendations. It appears the AI can be tricked into citing content from hacked websites and repurposed expired domains, raising serious questions about the reliability of its outputs.

Brockbank, Managing Director at Digitaloft, didn't conduct a formal academic study; his findings come from personal testing while exploring how brands get featured in ChatGPT's answers. His analysis suggests that bad actors are exploiting the system by publishing promotional content on compromised or expired domains that still carry significant SEO authority, allowing deceptive or irrelevant content to surface as legitimate recommendations from the AI.

As Brockbank wrote in his report:

“I believe that the more we understand about why certain citations get surfaced, even if these are spammy and manipulative, the better we understand how these new platforms work.”

How Scammers Manipulate ChatGPT's Sources

Brockbank’s research points to two primary methods being used to game ChatGPT's recommendation engine:

1. Using Hacked Websites

In several tests, ChatGPT provided recommendations for gambling sites that were sourced from legitimate, high-authority websites that had been secretly compromised. For instance, a listicle promoting online slots was found hidden on the website of a California-based domestic violence attorney.

Other hijacked sites included a United Nations youth coalition website and a U.S. summer camp portal. Both were used to host gambling-related articles, with some pages even using deceptive tactics like white text on a white background to hide the content from site owners.
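To make the hidden-text tactic concrete, here is a minimal, hypothetical Python sketch (not from Brockbank's report) that flags elements whose inline style sets the text color to white; on a white page background that text is invisible to visitors and casual site owners but still readable by crawlers. The URL, regex, and heuristic are illustrative assumptions.

```python
# Hypothetical heuristic for spotting the white-on-white hidden-text tactic
# described above. Assumes `requests` and `beautifulsoup4` are installed;
# this is an illustrative sketch, not a tool from the original report.
import re
import requests
from bs4 import BeautifulSoup

WHITE = re.compile(r"(#fff(fff)?\b|\bwhite\b|rgb\(\s*255\s*,\s*255\s*,\s*255\s*\))", re.I)

def find_suspect_elements(url: str):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    suspects = []
    for el in soup.find_all(style=True):
        # Flag elements whose inline style sets the text color to white;
        # on a white background this text is invisible to human visitors.
        match = re.search(r"(?:^|;)\s*color\s*:\s*([^;]+)", el["style"], re.I)
        if match and WHITE.search(match.group(1)):
            suspects.append((el.name, el.get_text(strip=True)[:80]))
    return suspects

if __name__ == "__main__":
    for tag, text in find_suspect_elements("https://example.com"):  # placeholder URL
        print(tag, text)
```

A real audit would also need to catch CSS classes, off-screen positioning, and zero-size fonts, but even this crude check shows how easily the tactic can be surfaced once someone looks for it.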

2. Repurposing Expired Domains

The second tactic involves buying expired domains that have a powerful backlink profile and rebuilding them to promote entirely different products or services.

One startling example involved a domain with over 9,000 links from top-tier sources like the BBC, CNN, and Bloomberg. The domain, which originally belonged to a UK arts charity, was repurposed to publish articles promoting gambling.

“There's no question that it's the site's authority that's causing it to be used as a source,” Brockbank explained. “The issue is that the domain changed hands and the site totally switched up.”

He discovered other domains, formerly owned by charities and retail businesses, that are now being used to push casino recommendations.
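One way a platform (or a curious reviewer) could catch a repurposed domain is to compare what the site publishes today with what it published before it changed hands. The sketch below is a hypothetical illustration that looks up an old snapshot through the Internet Archive's public Wayback "available" API; the domain and timestamp are placeholders, not sites from Brockbank's findings.

```python
# Hypothetical sketch: spot a repurposed expired domain by comparing what a
# site says now with what it said years ago in the Internet Archive.
# Uses the public Wayback "available" API; the domain below is a placeholder.
import requests

def archived_snapshot_url(domain: str, timestamp: str = "20180101") -> str | None:
    """Return the closest archived snapshot URL for a domain, if one exists."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": domain, "timestamp": timestamp},
        timeout=10,
    )
    snapshot = resp.json().get("archived_snapshots", {}).get("closest")
    return snapshot["url"] if snapshot else None

# A reviewer could then fetch both the snapshot and the live page and compare
# their topics (for example, with the topical-drift sketch shown later in this
# post); an arts-charity snapshot next to a live gambling listicle is a strong
# signal that the domain has changed hands.
print(archived_snapshot_url("example.org"))
```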

Why These Manipulative Tactics Work

So, why is ChatGPT falling for this? Brockbank suggests the AI's algorithm heavily favors domains with high perceived authority and prioritizes recently published content.

A critical flaw appears to be the system's inability to check if new content is thematically consistent with the website's original purpose.

“ChatGPT prefers recent sources, and the fact that these listicles aren’t topically relevant to what the domain is (or should be) about doesn’t seem to matter,” Brockbank observed.
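Nothing in the report reveals how OpenAI's sourcing works internally, but the missing safeguard is easy to sketch: compare a newly published page against a domain's established content and decline to cite it when the topical drift is large. The snippet below is a hypothetical illustration using TF-IDF cosine similarity from scikit-learn; the sample texts and the 0.1 threshold are assumptions, not known ChatGPT parameters.

```python
# Hypothetical sketch of the topical-consistency check that appears to be
# missing: compare a new article against a domain's historical content and
# flag it when similarity is low. Assumes scikit-learn and numpy are installed.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def topical_drift_score(historical_pages: list[str], new_page: str) -> float:
    """Cosine similarity between a new page and the domain's historical content."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(historical_pages + [new_page])
    history_centroid = np.asarray(matrix[:-1].mean(axis=0))   # mean of old pages
    new_vector = matrix[-1].toarray()                         # the new article
    return float(cosine_similarity(history_centroid, new_vector)[0, 0])

# Illustrative usage with made-up snippets: an arts-charity site suddenly
# publishing gambling listicles would score near zero and could be excluded
# as a recommendation source.
charity_pages = [
    "Our charity funds community theatre and arts education programmes.",
    "Donate to support local artists and youth arts workshops.",
]
new_article = "Top 10 online casinos and slot sites with the best bonuses."
if topical_drift_score(charity_pages, new_article) < 0.1:   # assumed threshold
    print("Flag: new content is topically inconsistent with this domain.")
```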

While getting featured in legitimate "best of" articles is a valid way for businesses to gain visibility in AI results, Brockbank stresses that using hacked or expired domains is a clear ethical breach.

“Injecting your brand or content into a hacked site or rebuilding an expired domain solely to fool a language model into citing it? That’s manipulation, and it undermines the credibility of the platform.”

What This Means for AI Users and Businesses

Although Brockbank's findings are from personal observations, they highlight a critical issue: ChatGPT can be fed manipulated sources and may present them as trustworthy recommendations.

This serves as a crucial reminder for both users and businesses. As AI search becomes more prevalent, establishing genuine authority through high-quality content and earned media is more important than ever. At the same time, this investigation underscores an urgent need for AI developers to enhance their systems' ability to detect and filter such deceptive content.

Until these platforms evolve, it's wise to approach AI-generated recommendations with a healthy dose of skepticism. As Brockbank concluded:

“We’re not yet at the stage where we can trust ChatGPT recommendations without considering where it’s sourced these from.”

To dive deeper into the examples and analysis, you can read the original report from Digitaloft.
