AI Security Gaps Exposed In New IBM Report
The Alarming Link Between AI and Data Breaches
Artificial intelligence applications are rapidly becoming a prime target for cybercriminals, yet enterprise security measures are struggling to keep pace. According to an IBM survey conducted with the Ponemon Institute across 600 organizations, a concerning trend is emerging.
While security incidents directly involving AI models remain relatively low, occurring in just 13% of reported breaches, the pattern behind those incidents is a major red flag. A staggering 97% of the businesses that did suffer an AI-related incident lacked proper AI access controls. This fundamental security failure, often stemming from compromised apps, APIs, or plug-ins, led to much wider data compromise and operational disruption.
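To make that failure concrete, here is a minimal, purely illustrative sketch (not taken from the IBM report) of the kind of deny-by-default access check that was missing in many of these incidents: gating calls to an AI plug-in or model API behind an explicit registry of required scopes. The endpoint names, scopes, and registry below are hypothetical.

# Illustrative sketch only: a minimal access-control gate in front of an AI
# model/plug-in endpoint. Endpoints, scopes, and the registry are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Caller:
    """An authenticated application, API client, or plug-in."""
    client_id: str
    scopes: set = field(default_factory=set)

# Hypothetical registry: the scope each AI-backed endpoint requires.
REQUIRED_SCOPE = {
    "/ai/summarize": "ai.summarize",
    "/ai/agent/execute": "ai.agent.execute",
}

def authorize(caller: Caller, endpoint: str) -> bool:
    """Deny by default: unregistered endpoints and missing scopes are rejected."""
    required = REQUIRED_SCOPE.get(endpoint)
    if required is None:
        return False  # endpoint not registered with security at all
    return required in caller.scopes

if __name__ == "__main__":
    plugin = Caller(client_id="crm-plugin", scopes={"ai.summarize"})
    print(authorize(plugin, "/ai/summarize"))      # True
    print(authorize(plugin, "/ai/agent/execute"))  # False: scope not granted

The point of the sketch is the default: anything not explicitly registered and scoped is refused, which is the opposite of how many of the breached AI deployments described in the report were configured.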
At the same time, attackers are weaponizing AI, with about one in six breaches now being driven by AI-powered tools that generate sophisticated phishing attempts or convincing deepfake impersonations.
The Governance Gap: Why Businesses Are Unprepared
Despite the clear and present danger, AI adoption is outpacing the implementation of adequate security and governance policies. The IBM report highlights that more than three in five enterprises have either failed to establish AI governance policies or are still in the preliminary stages of developing them.
Even among the organizations that do have policies, fewer than half have an established approval process for new AI deployments or perform regular audits to detect unsanctioned or "shadow" AI. This creates a dangerous blind spot in their security posture.
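For illustration only, and not drawn from the report, one simple form such an audit could take is scanning outbound traffic logs for connections to AI API hosts that are not on a sanctioned list. The log format, hostnames, and sanctioned list in this sketch are hypothetical.

# Illustrative sketch only: flag possible "shadow AI" by scanning egress logs
# for traffic to known AI API hosts that are not on the sanctioned list.
# Log columns, hostnames, and the sanctioned list are hypothetical.

import csv

# AI services the organization has formally approved.
SANCTIONED_AI_HOSTS = {"api.approved-ai.example.com"}

# Hosts associated with public AI/LLM APIs (example values only).
KNOWN_AI_HOSTS = {
    "api.approved-ai.example.com",
    "api.unvetted-llm.example.net",
}

def find_shadow_ai(log_path: str) -> list[dict]:
    """Return log rows whose destination is a known AI host that is not sanctioned."""
    findings = []
    with open(log_path, newline="") as fh:
        # Expects columns: timestamp, src_user, dst_host
        for row in csv.DictReader(fh):
            host = row.get("dst_host", "")
            if host in KNOWN_AI_HOSTS and host not in SANCTIONED_AI_HOSTS:
                findings.append(row)
    return findings

if __name__ == "__main__":
    for hit in find_shadow_ai("egress_log.csv"):
        print(f"{hit['timestamp']} {hit['src_user']} -> {hit['dst_host']}")

Even a rudimentary check like this gives security teams visibility into AI usage they never approved, which is exactly the blind spot the report describes.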
“The data shows that a gap between AI adoption and oversight already exists, and threat actors are starting to exploit it,” stated Suja Viswesan, VP of Security and Runtime Products at IBM.
The High Cost of Shadow AI
The financial and reputational consequences of this lack of oversight are severe. Approximately one in five enterprises that experienced a data breach traced the root cause to shadow AI.
The global average cost of a data breach is already a steep $4.4 million. However, for companies with high levels of unsanctioned AI, that figure rises by an additional $670,000, pushing the average toward roughly $5.1 million. This makes the rise of shadow AI one of the top three most costly breach factors, even surpassing the impact of a security skills shortage. Critically, unauthorized AI use was also linked to a higher volume of compromised personally identifiable information (PII) and intellectual property.
“The cost of inaction isn't just financial, it's the loss of trust, transparency and control,” Viswesan warned. Without proactive governance, the immense potential of AI can quickly transform into a significant and costly liability.