
New AI Attack Method Can Control What Machines See

2025-07-01 · 4 minute read
AI
Cybersecurity
Computer Vision

A New Threat to AI Vision: Introducing RisingAttacK

Researchers have unveiled a powerful new method for attacking artificial intelligence vision systems, giving hackers the ability to control what an AI perceives in an image. This new technique, developed at North Carolina State University and named RisingAttacK, has proven effective against the most common computer vision systems in use today, raising important questions about AI security.

What Are Adversarial Attacks and Why Do They Matter

The core issue lies with what are known as “adversarial attacks.” These are malicious efforts where an attacker subtly manipulates the data fed into an AI, causing it to misinterpret what it “sees.” The real-world implications are significant. For example, an attacker could alter an autonomous vehicle’s AI to prevent it from detecting pedestrians, other cars, or traffic signals. In the medical field, a compromised X-ray machine could feed an AI manipulated images, leading to dangerously inaccurate diagnoses.
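The core idea is easier to grasp with a toy example. The PyTorch sketch below uses the classic fast gradient sign method (FGSM), which is not RisingAttacK but illustrates the same principle: a per-pixel change far too small for a person to notice can be enough to flip a model's prediction. The model choice and the epsilon value here are arbitrary illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Any pretrained classifier works for this illustration.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def fgsm_attack(image, true_label, epsilon=0.005):
    """Classic FGSM: nudge each pixel by +/- epsilon in the direction that
    increases the classification loss. `image` is a (1, 3, H, W) tensor
    already normalized for the model; `true_label` is a (1,) class index."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # The perturbation is imperceptible per pixel, yet can change the output.
    return (image + epsilon * image.grad.sign()).detach()
```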

“We wanted to find an effective way of hacking AI vision systems because these vision systems are often used in contexts that can affect human health and safety – from autonomous vehicles to health technologies to security applications,” explains Tianfu Wu, a co-corresponding author of the research paper and an associate professor at NC State. “That means it is very important for these AI systems to be secure. Identifying vulnerabilities is an important step in making these systems secure, since you must identify a vulnerability in order to defend against it.”

How RisingAttacK Deceives AI Systems

RisingAttacK works through a sophisticated, multi-step process designed to make the smallest possible changes to an image to achieve the desired deception.

First, the program identifies all the visual features within an image. It then determines which of those features are most critical for the AI to achieve a specific goal, such as identifying a car. As Wu puts it, “if the goal of the attack is to stop the AI from identifying a car, what features in the image are most important for the AI to be able to identify a car in the image?”
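The article does not spell out how that ranking is done, but a common gradient-based way to approximate "which parts of the image matter most for recognizing class X" looks roughly like the saliency sketch below. Treat the function name and the saliency heuristic as illustrative assumptions, not the authors' actual method.

```python
import torch

def rank_feature_importance(model, image, target_class):
    """Rank input locations by how strongly they influence the score for
    `target_class` (e.g., 'car'). A simple gradient-based saliency map,
    used here only to illustrate the 'find the key features' step."""
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]     # logit for the targeted class
    score.backward()
    saliency = image.grad.abs().sum(dim=1)    # aggregate over color channels
    # The highest-saliency locations are the features the model leans on most.
    return saliency
```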

Next, RisingAttacK calculates how sensitive the AI is to changes in the data related to those key features. “This requires some computational power, but allows us to make very small, targeted changes to the key features that make the attack successful,” Wu notes. The result is an image that looks identical to the original to human eyes but completely fools the AI: a car clearly visible to a person in the manipulated image can be entirely invisible to the model.
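The paper's title hints at how this sensitivity analysis works: perturbations are built from the right singular vectors of the "adversarial Jacobian," the matrix describing how each targeted class score changes with each input pixel. The sketch below is a simplified, one-step rendering of that idea in PyTorch, not the published iterative algorithm; the function name, step size, and sign convention are assumptions.

```python
import torch
from torch.autograd.functional import jacobian

def singular_direction_perturbation(model, image, target_classes, step=0.01):
    """Sketch of a sensitivity-based perturbation: build the Jacobian of the
    chosen class scores w.r.t. the input, take its SVD, and step along the
    leading right singular vector, i.e. the input direction the model is most
    sensitive to. RisingAttacK iteratively learns a combination of such
    directions; this single step is only an illustration."""
    def scores(x):
        return model(x.unsqueeze(0))[0, target_classes]

    J = jacobian(scores, image.squeeze(0))       # shape (k, 3, H, W); costly
    J = J.reshape(len(target_classes), -1)       # flatten to (k, n_pixels)
    _, _, Vh = torch.linalg.svd(J, full_matrices=False)
    direction = Vh[0].reshape(image.shape[1:])   # most sensitive direction
    # Sign and scale would be chosen to push the targeted scores down
    # (e.g., to make the 'car' class disappear) while staying imperceptible.
    return (image - step * direction).detach()
```

The Jacobian computation is the expensive part Wu alludes to, but it is what lets the changes stay both tiny and precisely targeted.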

This technique is not limited to a single object. The researchers state that RisingAttacK can influence an AI's ability to see any of the top 20 or 30 targets it was trained to identify, including cars, pedestrians, bicycles, and stop signs.

Proven Effective and Looking Ahead

The research team rigorously tested RisingAttacK against four of the most widely used vision AI programs: ResNet-50, DenseNet-121, ViT-B, and DeiT-B. The attack was successful in manipulating all four, demonstrating its broad effectiveness.
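For readers who want to run a comparison of this kind, the sketch below loads standard pretrained versions of three of the four architectures from torchvision (DeiT-B is distributed separately, for example via the timm package) and reports each model's top-1 prediction on a clean and a perturbed image. This is an illustrative harness under those assumptions, not the team's evaluation code.

```python
import torch
from torchvision import models

# Three of the four tested architectures ship with torchvision;
# DeiT-B can be loaded the same way via the `timm` library.
classifiers = {
    "ResNet-50":    models.resnet50(weights=models.ResNet50_Weights.DEFAULT),
    "DenseNet-121": models.densenet121(weights=models.DenseNet121_Weights.DEFAULT),
    "ViT-B/16":     models.vit_b_16(weights=models.ViT_B_16_Weights.DEFAULT),
}

@torch.no_grad()
def compare_predictions(original, perturbed):
    """Print each model's top-1 class for the clean and perturbed image.
    A successful attack changes the prediction even though the two images
    look identical to a human."""
    for name, net in classifiers.items():
        net.eval()
        clean = net(original).argmax(dim=1).item()
        attacked = net(perturbed).argmax(dim=1).item()
        print(f"{name}: clean={clean}  perturbed={attacked}")
```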

The team is not stopping here. “While we demonstrated RisingAttacK’s ability to manipulate vision models, we are now in the process of determining how effective the technique is at attacking other AI systems, such as large language models,” Wu says. The ultimate objective is clear: “Moving forward, the goal is to develop techniques that can successfully defend against such attacks.”

Access the Research and Test the Tool

The research paper, titled “Adversarial Perturbations Are Formed by Iteratively Learning Linear Combinations of the Right Singular Vectors of the Adversarial Jacobian,” is set to be presented at the International Conference on Machine Learning (ICML). To help the broader community identify and patch vulnerabilities, the research team has made RisingAttacK publicly available. The program can be found on the team's GitHub repository.
