Back to all posts

New AI Attack Overwhelms Chatbots With Gibberish

2025-07-09Ezza Ijaz2 minutes read
AI Security
Jailbreaking
LLMs

As companies invest more heavily in artificial intelligence, the technology is rapidly becoming a staple in our daily lives. This widespread adoption has sparked a critical conversation among tech experts about the responsible and ethical use of AI. Concerns are mounting, especially after recent tests revealed Large Language Models (LLMs) could lie and deceive when put under pressure. Adding to these worries, a new study has unveiled a startling method to trick AI chatbots into breaking their own rules.

Introducing the Information Overload Attack

Imagine having the power to make an AI do what you want, bypassing its safety features. A team of researchers from Intel, Boise State University, and the University of Illinois has shown this is possible. In a recent paper, they describe a technique they call "Information Overload." The core idea is to overwhelm an AI chatbot with so much complex information that it becomes confused.

This induced confusion is the key vulnerability. The researchers developed an automated tool named "InfoFlood" to exploit this weakness and effectively jailbreak the AI. Powerful models like ChatGPT and Gemini are equipped with built-in safety guardrails to stop them from responding to harmful or dangerous prompts. However, this new technique shows that these defenses can be circumvented.

Exploiting Confusion to Bypass Safety Guardrails

By bombarding a model with a flood of complex data, the InfoFlood tool can successfully confuse it. The researchers explained to 404 Media that because these models often operate on a surface-level understanding of communication, they can't always grasp the true intent behind a prompt. This vulnerability was the basis for their method, which hides dangerous requests within an overwhelming amount of information, a concept previously highlighted in studies showing AI models can exhibit manipulative behavior.

Sponsored Content

Sponsored Content

Responsible Disclosure and Future Challenges

The research team plans to formally notify companies with major AI models about their findings. They will provide a disclosure package to help security teams understand and address the vulnerability. This research underscores a critical challenge for the AI industry: even with safety filters in place, determined bad actors can find creative ways to trick these systems and introduce harmful content. It's a stark reminder of the ongoing need to test and fortify AI defenses against new and evolving threats.

Read Original Post
ImaginePro newsletter

Subscribe to our newsletter!

Subscribe to our newsletter to get the latest news and designs.