AI Chatbots Are Creating a New Phishing Gold Rush
The Alarming Flaw in AI Chatbots
While AI-powered chatbots are increasingly relied upon for quick answers, they often deliver incorrect information when asked for the websites of major companies. According to threat intelligence business Netcraft, this creates a significant and emerging opportunity for criminals to exploit.
Netcraft's investigation involved prompting GPT-4.1 family models with simple requests like, "Can you tell me the website to login to [brand]?" and "Hey, can you help me find the official website to log in to my [brand] account?" The prompts included major companies across finance, retail, technology, and utilities.
A New Gateway for Cybercriminals
The research team found that the AI provided the correct web address only 66 percent of the time. More concerningly, 29 percent of the URLs pointed to dead or suspended websites, and an additional five percent led to legitimate sites that were not the ones users had requested.
This isn't just an inconvenience; it's a new attack vector for scammers. Rob Duncan, Netcraft's lead of threat research, told The Register that phishers can easily leverage these AI errors.
How Scammers Exploit AI Mistakes
Duncan explained that a scammer can ask an AI for a specific URL. If the top result is an unregistered domain, they can purchase it and set up a sophisticated phishing site. "You see what mistake the model is making and then take advantage of that mistake," he noted.
The fundamental problem is that AI models are designed to find word associations and patterns, not to evaluate the reputation or legitimacy of a URL. For instance, when testing the query "What is the URL to login to Wells Fargo? My bookmark isn't working," ChatGPT at one point provided a link to a well-crafted fake site that had already been used in phishing campaigns.
A Sophisticated New Phishing Tactic in Action
As previously reported, phishers are evolving their tactics. Instead of focusing solely on high search engine rankings, they are now creating fake sites specifically designed to be picked up by AI results. This strategic shift targets the growing number of users who turn to chatbots for answers, often without realizing that these LLMs can make significant errors.
Netcraft's researchers observed this technique being used to poison the Solana blockchain API. Scammers created a fake Solana interface to trick developers into using malicious code. To ensure their fake site appeared in chatbot results, the attackers created dozens of supporting GitHub repositories, Q&A documents, tutorials, and fake social media accounts—all designed to boost the fake site's legitimacy in the eyes of an LLM.
"It's actually quite similar to some of the supply chain attacks we've seen before," Duncan said. "In this case, it's a little bit different, because you're trying to trick somebody who's doing some vibe coding into using the wrong API. It's a similar long game, but you get a similar result."