How Attackers Trick AI With Hidden Fake Webpages
Security researchers have uncovered a surprisingly simple attack vector that targets the very way AI search tools and autonomous agents gather information from the web.
This new vulnerability, called “agent-aware cloaking,” allows malicious actors to show different versions of a webpage to AI crawlers—like OpenAI's Atlas, ChatGPT, and Perplexity—while presenting completely normal, legitimate content to human users. It's a significant step up from traditional cloaking attacks, as it directly weaponizes the trust that AI systems have in the data they retrieve online.
How Agent-Aware Cloaking Deceives AI
Unlike complex SEO manipulation, agent-aware cloaking is straightforward yet dangerous. It operates at the content delivery level using simple conditional rules. When a website detects a request from an AI's user-agent header, the server is instructed to deliver fabricated or “poisoned” content. Meanwhile, any human visitor using a standard web browser sees the genuine version of the site.
The danger lies in its simplicity. This method doesn't require a technical exploit or a system breach; it only relies on smart traffic routing to deceive the AI.
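To make the mechanism concrete, here is a minimal sketch of the conditional rule described above, written as a single Flask route. The user-agent markers and page bodies are illustrative assumptions, not code recovered from any observed attack:

```python
# Minimal sketch of agent-aware cloaking, assuming a Flask server.
# The user-agent markers and page bodies below are illustrative.
from flask import Flask, request

app = Flask(__name__)

# Substrings that appear in the User-Agent headers of some AI crawlers
# (treat this exact list as an assumption).
AI_AGENT_MARKERS = ("GPTBot", "OAI-SearchBot", "ChatGPT-User", "PerplexityBot")

GENUINE_PAGE = "<html><body><h1>Professional portfolio</h1></body></html>"
POISONED_PAGE = "<html><body><h1>Fabricated, defamatory narrative</h1></body></html>"

@app.route("/")
def index():
    ua = request.headers.get("User-Agent", "")
    # One conditional on the User-Agent header is the entire trick:
    # AI crawlers receive the poisoned page, browsers the genuine one.
    if any(marker in ua for marker in AI_AGENT_MARKERS):
        return POISONED_PAGE
    return GENUINE_PAGE
```

Note that nothing here exploits a software flaw: the server simply routes traffic, which is exactly why conventional intrusion defenses never fire.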

Real-World Dangers: The SPLX Experiments
To demonstrate the real-world impact, researchers at SPLX conducted a series of controlled experiments.
In one case, they created a website for a fictional designer, Zerphina Quortane. To a human user, the site was a professional portfolio. However, when AI agents like Atlas and Perplexity crawled the URL, they were served content describing Zerphina as a “Notorious Product Saboteur,” complete with fake project failures and made-up ethical violations. The AI systems accepted this poisoned narrative as fact, presenting it as authoritative information without any attempt at verification.
In a second experiment targeting automated hiring, the team created five candidate résumés. When an AI agent crawled the résumé for one candidate (C5), the server provided an inflated version with better titles and exaggerated achievements. As a result, the AI's ranking of the candidates shifted dramatically, placing the manipulated candidate at the top. When the AI was later fed the actual résumés offline, the ranking was completely reversed, with the previously favored candidate ranked last. This simple trick could determine who gets a job interview.

The Core Vulnerability: AI's Blind Trust
These experiments highlight a fundamental weakness in current AI information retrieval models. AI crawlers do not perform provenance validation or cross-reference the information they find. When an AI pulls data from a website, it treats that content as ground truth. This manipulated data is then incorporated directly into summaries, recommendations, and decisions without any human oversight or warning flags to indicate potential manipulation.
Building Defenses Against AI Content Poisoning
To combat this threat, organizations must adopt multi-layered defenses to protect AI systems from content poisoning. Several key strategies are recommended:
- Provenance Signals: Websites and content providers should be required to cryptographically verify the authenticity of their published information.
- Crawler Validation: Protocols must be developed to ensure that different user agents are served identical content, making cloaking attacks easy to detect through automated checks (see the detection sketch after this list).
- Continuous Monitoring: AI-generated outputs should be continuously monitored to flag any conclusions that diverge significantly from expected results.
- Model-Aware Testing: Organizations should regularly test their AI systems to ensure they behave consistently when accessing the same external data through different methods.
- Human Verification: For high-stakes decisions in areas like hiring, compliance, or procurement, a human verification workflow is essential to validate AI-driven outcomes.
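As a starting point for the crawler-validation check above, a monitor can fetch the same URL under a browser user agent and an AI-crawler user agent, then compare the responses. A minimal sketch, assuming illustrative user-agent strings:

```python
# Sketch of an automated cloaking check: request the same URL as a
# browser and as an AI crawler, then compare response digests.
# The user-agent strings are illustrative assumptions.
import hashlib
import urllib.request

BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
AI_UA = "GPTBot/1.0"  # hypothetical AI-crawler user agent

def fetch_digest(url: str, user_agent: str) -> str:
    """Fetch a URL with a given User-Agent; return a SHA-256 digest of the body."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def looks_cloaked(url: str) -> bool:
    """Flag a URL whose content differs between browser and AI user agents."""
    return fetch_digest(url, BROWSER_UA) != fetch_digest(url, AI_UA)

if __name__ == "__main__":
    print(looks_cloaked("https://example.com/"))
```

In practice, dynamic pages legitimately differ between any two requests, so a production check would normalize timestamps, session tokens, and personalized fragments before comparing digests rather than hashing raw bodies.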