The Hidden Dangers of Coding With AI Assistants
The Perils of Vibe Coding in the AI Era
The rise of AI assistants in software development has introduced a new, risky practice dubbed "vibe coding": developers accept AI-generated code and APIs on intuition rather than subjecting them to rigorous verification. As one developer noted, this opens the door to supply chain attacks, where the goal is to "trick somebody who's doing some vibe coding into using the wrong API."
This isn't an isolated concern. Some in the development community see this as the latest evolution of low-quality practices. One commentator drew a direct line from past issues to today's challenges:
In 2015 it was copy and pasting code from stackoverflow, in 2020 it was npm install left-pad, in 2025 it's vibecoding.
The sentiment is that this disregard for quality leaves products vulnerable to being broken or hacked. Even those who engage in vibe coding concede its pitfalls, admitting that the unreliability of AI outputs forces them to fall back on manually supplying the real API documentation to the assistant to ensure accuracy.
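That "wrong API" risk suggests one practical habit: verify any AI-suggested dependency before installing it. Below is a minimal sketch, in Python, that checks whether a package name actually exists on PyPI via its public JSON metadata endpoint; the command-line usage and package names are purely illustrative.

```python
import sys
import urllib.request
from urllib.error import HTTPError, URLError

def package_exists_on_pypi(name: str, timeout: float = 10.0) -> bool:
    """Return True if `name` is a real package on PyPI.

    PyPI serves package metadata at /pypi/<name>/json and returns
    404 for names that do not exist -- the kind of name an AI
    assistant might hallucinate, or an attacker might typosquat.
    """
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors are worth surfacing, not swallowing
    except URLError:
        raise  # network failure: refuse to vouch for the package either way

if __name__ == "__main__":
    # Check every name passed on the command line before installing it.
    for name in sys.argv[1:]:
        verdict = "exists" if package_exists_on_pypi(name) else "NOT FOUND - do not install"
        print(f"{name}: {verdict}")
```

Note that existence on PyPI is only a first filter: precisely because attackers can register the names models keep hallucinating, a package that does exist still deserves a look at its author, age, and download history before it goes anywhere near a build.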
A New Wave of Supply Chain and Spam Attacks
The danger extends beyond individual developer habits and into the broader digital ecosystem. The tendency of Large Language Models (LLMs) to serve incorrect or malicious URLs is turning them into a potential paradise for phishers: an attacker only has to register a domain the models keep hallucinating and wait for traffic. This is compounded by a secondary effect on web spam.
Historically, site operators added rel="nofollow" to user-submitted links to deter SEO spammers. However, spammers have discovered that many LLM content scrapers ignore this directive. As a result, comment and forum spam is on the rise again, polluting the data that future AI models will be trained on.
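For site operators the defensive half of this is still worth doing, even if LLM scrapers ignore it. A minimal sketch, assuming the third-party BeautifulSoup (bs4) library, of rewriting user-submitted HTML so every link carries rel="nofollow" (plus the newer "ugc" hint, which the original discussion did not mention but which serves the same purpose for user-generated content):

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

def mark_user_links(html: str) -> str:
    """Add rel="nofollow ugc" to every anchor in user-submitted HTML.

    "nofollow" tells search engines not to pass ranking credit;
    "ugc" flags the link as user-generated content. Neither stops a
    scraper that chooses to ignore them, but they keep the classic
    SEO incentive for comment spam off the table.
    """
    soup = BeautifulSoup(html, "html.parser")
    for anchor in soup.find_all("a"):
        anchor["rel"] = ["nofollow", "ugc"]
    return str(soup)

if __name__ == "__main__":
    comment = '<p>Great post! See <a href="https://example.com/totally-legit">this</a>.</p>'
    print(mark_user_links(comment))
    # -> <p>Great post! See <a href="https://example.com/totally-legit" rel="nofollow ugc">this</a>.</p>
```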
In large organizations, these risks are often quietly acknowledged but not addressed. A culture of pushing for AI adoption at all costs can make it feel like political suicide to raise concerns, leading to a situation where many are "quietly watching lots of ships slowly taking water."
The Looming Crisis of Contaminated Data
The unchecked spread of AI-generated content is creating a data pollution problem of unprecedented scale. One of the most striking analogies shared by a commenter compared the search for clean data to the hunt for a rare material:
It reminds me of non-radioactive steel, the kind you can only get from ships sunk before the atomic bomb. Someday, we’ll be scavenging for clean data the same way: pre-AI, uncontaminated by the AI explosion of junk.
This highlights a growing fear that the internet's training data is becoming irrevocably poisoned, which will only lead to more unreliable and hallucinatory AI models in the future.
The Reality of AI Hallucinations for Developers
The problem isn't just about bad URLs. Developers report that AI assistants frequently hallucinate in highly specific and misleading ways. One user shared their experience with Claude, which invented plausible-sounding but non-existent options for DOM method arguments and Wrangler configurations.
The wild inconsistency of these tools leaves many wondering how anyone is producing quality, reliable software with them. While some have found that niche products, such as those whose domains fall outside the top 200,000 in rankings like Cloudflare Radar's, are less prone to being hallucinated, the risk remains high for mainstream technologies.
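Until the tools become more reliable, the pragmatic answer is to verify rather than trust. Below is a minimal sketch of a pre-flight check that an AI-suggested URL at least resolves and answers before it lands in code or documentation; the example URLs are illustrative, and a live response still says nothing about who registered the domain, so this is a filter, not a guarantee.

```python
import urllib.request
from urllib.error import HTTPError, URLError

def url_responds(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with a non-error status.

    This only confirms that something is listening at the address.
    A hallucinated domain that a phisher has since registered will
    also pass, so treat False as a hard stop and True as merely
    "worth a human look".
    """
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except HTTPError:
        return False  # 4xx/5xx: the path the model suggested is wrong
    except URLError:
        return False  # DNS failure or refused connection: likely hallucinated

if __name__ == "__main__":
    for candidate in ("https://developer.mozilla.org/", "https://example.invalid/made-up-docs"):
        print(candidate, "->", "responds" if url_responds(candidate) else "no response")
```

Some servers reject HEAD requests outright, so a stricter version would fall back to a GET before declaring a URL dead.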
Are There Safer Alternatives?
Amidst the concerns, some developers are actively seeking out more reliable tools. One user mentioned using Phind, a search engine for developers, noting that they have rarely encountered a fake URL on the platform. While alternatives exist, the central challenge remains: developers must shift from "vibe coding" to a mindset of critical verification, treating AI-generated content with the skepticism it currently warrants.