Artificial intelligence (AI) is quietly – and in many cases invisibly – embedding itself into organisations, often without formal oversight.
For chief information security officers (CISOs), this represents a profound shift in risk – one that traditional controls are struggling to contain.
According to Saviynt’s recently released 2026 CISO AI Risk Report[1], AI is no longer experimental. It is operational, active, and in many cases acting with privileges that organisations neither fully understand nor explicitly granted.
A New Class of Identity
Unlike human users or conventional service accounts, AI systems operate with a level of autonomy and speed that challenges the foundations of identity security. These systems are reading customer data, invoking APIs, modifying configurations, and even chaining actions together – often without clear attribution.
This creates a fundamental problem for security teams, as the basic questions that underpin governance – who performed an action, and whether it was authorised – are becoming harder to answer.
AI identities do not behave like people, nor do they fit neatly into existing access models, and the scale of the issue is already significant. According to the report, more than 70% of organisations say AI tools have access to core business systems such as CRM and ERP platforms, yet only a fraction say that access is effectively governed.
Visibility Is Falling Behind
If access is the first problem, visibility is the second – and arguably the more urgent. Most organisations simply do not know where AI is operating or what it is doing.
This is not a minor oversight. AI systems are now embedded across SaaS platforms, cloud workloads, and internal applications, often creating or modifying their own identities in the process.
The consequence is a growing ‘visibility crisis’. Security teams are left piecing together fragmented signals from multiple systems, and often after the fact.
This lack of visibility feeds directly into a broader governance failure. Most organisations have yet to extend formal access policies to AI identities, leaving them to operate outside established controls. More than 80% of security leaders admit they do not enforce access policies for AI systems, and only a small minority believe they could contain a compromised AI agent.
This is particularly concerning given the nature of AI behaviour. Nearly half of organisations report having already observed unintended or unauthorised actions from AI agents, while a third have experienced a security incident or near miss linked to these systems.
The Rise of Shadow AI
Compounding the challenge is the rapid spread of ‘shadow AI’, which occurs when tools are deployed by business units or individuals without formal approval.
According to the report, three-quarters of organisations have identified unsanctioned AI tools running within their environments, often with embedded credentials or elevated access privileges.
These tools are not confined to simple productivity assistants, and many integrate directly with enterprise systems. This is creating new trust relationships with third-party providers and expanding the attack surface.
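One practical starting point for discovering shadow AI is scanning configuration and environment data for embedded AI-provider credentials. The sketch below is a minimal illustration of that idea; the key patterns and function names are illustrative assumptions, not taken from the report or from any real discovery product, and production tooling would cover far more credential formats and scan sources such as repositories, CI variables, and SaaS configurations.

```python
import re

# Hypothetical token patterns for a few common AI providers.
# Real discovery tooling would maintain a far larger, vetted pattern set.
AI_KEY_PATTERNS = {
    "openai": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "anthropic": re.compile(r"sk-ant-[A-Za-z0-9-]{20,}"),
    "huggingface": re.compile(r"hf_[A-Za-z0-9]{20,}"),
}


def find_embedded_ai_credentials(text: str) -> list[tuple[str, str]]:
    """Return (provider, redacted_key) pairs found in a config blob."""
    findings = []
    for provider, pattern in AI_KEY_PATTERNS.items():
        for match in pattern.findall(text):
            # Redact before logging so the scanner never leaks the secret itself.
            findings.append((provider, match[:8] + "..."))
    return findings


sample_config = 'AI_ASSISTANT_TOKEN = "sk-abcdefghij1234567890XYZ"'
print(find_embedded_ai_credentials(sample_config))  # → [('openai', 'sk-abcde...')]
```

Findings like these give security teams an inventory to govern, turning "reactive containment" into a repeatable discovery process.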
For CISOs, this represents a shift from controlled deployment to reactive containment. Rather than preventing adoption, security teams are increasingly focused on discovering and governing AI systems after they have already been introduced.
Legacy Tools Prove Inadequate
Underlying these challenges is a deeper structural issue. The tools organisations rely on were not designed for an AI-driven environment.
Many enterprises continue to apply traditional controls such as login-based authentication and static access policies to systems that operate via APIs, tokens, and autonomous decision making. As a result, enforcement lags behind activity, and risk accumulates in the gaps.
Only a quarter of organisations have implemented AI-specific monitoring or governance controls. The majority are attempting to manage machine-speed risk using fragmented solutions built for slower, human-centric workflows.
This mismatch is becoming untenable. As AI systems scale, the limitations of legacy identity frameworks are becoming increasingly apparent, forcing organisations to rethink their approach.
Identity Is the New Perimeter
Amid this disruption, one principle is emerging as a unifying theme: identity is becoming the primary control layer.
As traditional network boundaries dissolve in cloud and hybrid environments, identity offers a consistent point of enforcement. It is where access decisions are made, privileges are defined, and activity can be monitored in context.
Security leaders are beginning to respond accordingly. Investment is shifting towards identity discovery, continuous monitoring, and real-time analytics, with a focus on automating responses to emerging threats.
In more mature environments, this translates into automated lifecycle management for AI identities, just-in-time access controls, and policy-driven remediation when anomalies are detected.
The broader implication is clear: AI is reshaping the cybersecurity landscape faster than organisations can adapt. For CISOs, the challenge is no longer whether to govern AI, but how quickly they can close the gap between deployment and control.
[1] Saviynt.com