AI is a Ticking Time Bomb for Your Data, Reveals New Report From Varonis
While reading the latest report from Varonis about the impact of AI on data risk, the opener of the old Mission: Impossible TV show kept running through my head. You know the one: a fuse is lit, and the spark makes its way along a circuitous route before igniting a time bomb in the center of the screen.
Props to the show’s creators for ratcheting up the tension while the credits roll, before even a single scene plays out — engaging viewers in a race against time before a bomb explodes. According to the Varonis report, that’s about where we stand with AI and its potential to blow up data protections, so to speak.
Varonis CEO, president and co-founder Yaki Faitelson, commenting on the report's release, acknowledges the very real productivity gains from AI but cautions that the pressure-induced push by CIOs and CISOs "to adopt AI at warp speed," which is driving adoption of data security platforms, requires "a data-centric approach to security" to avoid AI-related data breaches.
The State of Data Security Report: Quantifying AI’s Impact on Data Risk tapped the data risk assessments of 1,000 organizations and found that at 99% of them, sensitive data has been exposed to AI tools. Equally alarming, those tools have access to 90% of sensitive cloud data, which includes AI training data.
Blinking hot on defenders’ radar is the perpetuation of shadow AI. Almost all organizations (98%) whose risk assessments were reviewed have unverified apps, including shadow AI, in their environments.
That’s of concern to Nicole Carignan, senior vice president, security & AI strategy at Darktrace, whose company has published similar indicators that shadow AI is problematic. Darktrace’s State of AI Cybersecurity 2025 Report found “the escalation of shadow AI has introduced greater risk with security teams progressively seeking ways to lock down their data,” says Carignan. “If it remains unchecked, this raises serious questions and concerns about data loss prevention and compliance as new regulations start to take effect.”
Shadow AI will continue to be problematic as long as workers use unvetted AI-powered work apps, “unapproved personal AI apps and existing apps that now embed AI components on their mobile devices,” says Krishna Vishnubhotla, vice president, product strategy at Zimperium.
“These apps often bypass security policies that haven’t evolved for AI, creating blind spots where sensitive data can be exposed,” Vishnubhotla says, noting the unique risk of mobile devices that “operate outside traditional network perimeters and may lack adequate security controls.”
But, he points out, the problem is much bigger. “Shadow AI extends beyond unapproved applications and involves embedded AI components that can process and disseminate sensitive data in unpredictable ways,” he says. Unlike traditional shadow IT, which may be limited to unauthorized software or hardware, shadow AI can run on employee mobile devices outside the organization’s perimeter and control, creating a new set of security and compliance risks that organizations find harder to track and mitigate.
That’s especially true as new regulations come online. The fines from violations and the resultant mitigation costs can amplify the financial impact of shadow AI, costing organizations millions or even billions of dollars.
Fight AI With AI?
“Federal agencies handling vast amounts of sensitive or classified information, financial institutions and healthcare organizations are particularly vulnerable,” says Vishnubhotla, because they collect and analyze huge volumes of data considered by threat actors to be high value. That makes AI tools attractive to them.
Perhaps then, the best way to stop a bad guy with AI is a good guy with AI. The spectre of shadow AI will drive “an increasing need for AI asset discovery, the ability for companies to identify and track the use of AI systems throughout the enterprise,” says Carignan. “It is imperative that CIOs and CISOs dig deep into new AI security solutions – asking comprehensive questions about data access and visibility.”
Satyam Sinha, CEO and co-founder at Acuvity, believes that “the gap in confidence and understanding of AI creates a massive opportunity for AI native security products to be created, which can ease this gap.”
And Carignan expects “an explosion of tools that use AI and GenAI within enterprises and on devices used by employees.” “In addition to managing AI tools that are built in-house, security teams will see an upsurge in the volume of existing tools that have new AI features and capabilities embedded,” she says.
Those AI-driven security solutions will be able to “identify embedded AI components, detect anomalies in mobile traffic, flag unauthorized AI interactions and prevent data from being exfiltrated through unapproved applications,” Vishnubhotla says. And within the mobile ecosystem, AI-powered behavioral analysis “can help agencies identify unauthorized AI apps running on devices and enforce security policies in real time.”
“We have to consider the use of Gen-AI native security products and techniques, which will help achieve a multiplier effect on the personnel,” Sinha says. “This is the only way to solve this problem.”
But there’s a catch, a big one that of late has drawn the attention of security teams. Without proper vetting, such tools could be easily exploited, Vishnubhotla says.
That’s why it’s important for security teams to “implement mobile threat detection solutions that analyze app behavior, network traffic and endpoint risks to identify unauthorized AI use,” he says, noting that “AI model watermarks and provenance tracking can be useful for identifying data manipulation but are not sufficient on their own — agencies need a layered security approach that includes real-time mobile threat detection, policy enforcement and data loss prevention.”
He suggests agencies create clear policies governing AI usage, “implement mobile threat defense solutions to detect unauthorized AI applications and provide secure, approved AI tools that meet personnel needs.” Education and awareness are critical components—employees must understand the risks of using unapproved AI apps and be pointed to vetted alternatives sanctioned by their organizations.
The fuse has been lit. Can defenders execute their security strategies and get there in time… before the opening credits close and the bomb detonates? We’ll see. But for now, the timer has started, and the race is on.
Tick. Tick. Tick. Tick. Tick.