Back to all posts

Wondershare RepairIt Flaws Leak User Data And AI Models

2025-09-25The Hacker News4 minutes read
Cybersecurity
AI Security
Data Breach

Software security vulnerabilities

Cybersecurity researchers have brought to light two severe security flaws in the popular Wondershare RepairIt application. These vulnerabilities present a significant danger, potentially exposing sensitive user data and opening the door for sophisticated AI model tampering and supply chain attacks.

The Critical Vulnerabilities Explained

The two critical-rated vulnerabilities were discovered by researchers at Trend Micro and are identified as follows:

  • CVE-2025-10643 (CVSS score: 9.1): An authentication bypass flaw related to the permissions granted to a storage account token.
  • CVE-2025-10644 (CVSS score: 9.4): A similar authentication bypass vulnerability, this time within the permissions granted to an SAS token.

Successfully exploiting these issues could allow an attacker to bypass the system's authentication protections. This access could then be leveraged to launch a supply chain attack, potentially leading to arbitrary code execution on the computers of the software's users.

Poor Security Practices and Widespread Risk

According to Trend Micro researchers Alfredo Oliveira and David Fiser, the AI-powered application was found to have DevSecOps practices that inadvertently leaked private user data, contradicting its own privacy policy. The core of the issue lies in poor development choices, such as embedding overly permissive cloud access tokens directly into the application's code.

These tokens granted both read and write access to sensitive cloud storage where user data was stored without encryption. This oversight potentially allows for widespread abuse of users' uploaded photos and videos. The problem is compounded by the fact that the exposed storage contained more than just user data; it also held AI models, software binaries for other Wondershare products, container images, scripts, and even company source code. This creates a direct pathway for attackers to tamper with AI models or software executables, setting the stage for devastating supply chain attacks targeting all downstream customers.

AI security diagram

Vendor Inaction and User Recommendations

Trend Micro reported that it responsibly disclosed these vulnerabilities through its Zero Day Initiative (ZDI) back in April 2025. However, despite repeated attempts to make contact, the company has yet to receive a response from Wondershare. In the absence of a patch or official guidance, users are advised to "restrict interaction with the product" to avoid potential risks.

Trend Micro noted that the rush to innovate and release new features can often lead organizations to overlook critical security implications. This case highlights the importance of integrating robust security processes throughout the entire development lifecycle.

The Bigger Picture: AI and Emerging Security Threats

This incident is a symptom of a larger problem in the rapidly evolving world of AI development. Trend Micro has previously warned about the dangers of exposing Model Context Protocol (MCP) servers without proper authentication. These servers can act as an open door to an organization's most sensitive data sources.

Similarly, research has shown that exposed container registries can be abused by attackers to poison AI models. An attacker could pull a legitimate AI model, tamper with its parameters to introduce malicious behavior, and then push it back to the registry for unsuspecting users to download.

The rapid, often unsecured, adoption of AI tools introduces entirely new attack vectors, including tool poisoning, prompt injection, and privilege escalation.

New Attack Vectors: Indirect Prompt Injection and LitL

Researchers are continuously discovering novel ways to exploit AI systems. Palo Alto Networks Unit 42 recently detailed how AI code assistants are vulnerable to indirect prompt injection attacks. In this scenario, an attacker embeds malicious prompts within external data sources. When a developer provides this tainted data to the AI assistant, the hidden prompt can trick the tool into leaking data or injecting backdoors into code.

Large language model graphic

Another sophisticated method is the "lies-in-the-loop" (LitL) attack, described by Checkmarx researchers. This attack abuses the trust between a human user and an AI agent by feeding the agent misleading context. The agent then presents this false, seemingly safe information to the human user, convincing them to approve a malicious action and effectively bypassing human-in-the-loop safety controls.

Read Original Post
ImaginePro newsletter

Subscribe to our newsletter!

Subscribe to our newsletter to get the latest news and designs.