Introduction
The collision between technological acceleration and human adaptability will define the cybersecurity landscape in 2026. Identity and trust will sit at the centre of this struggle — as the proliferation of machine and AI identities exposes the fragility of traditional controls and the limits of human oversight.
As organisations race to deploy autonomous agents and machine-led systems to drive efficiency, they are also inadvertently multiplying their attack surface. In this environment, where a single credential or expired certificate can bring critical operations to a halt, identity becomes not just a security layer, but the ultimate control point.
Below are three predictions shaping the year ahead.
Autonomous AI Agents Will Become the Next Breach Attack Vector
In 2026, the world will most likely experience its first major breach caused by a “runaway AI agent”. As adoption of Model Context Protocol (MCP), Agent Communication Protocol (ACP), and Agent-to-Agent (A2A) frameworks becomes mainstream, security teams will face an entirely new class of risks. These protocols — designed to help agents communicate with critical systems and with each other — were not built with a security-first mindset.
Most customers I’ve spoken to this year are looking at securing these new communication protocols because they can’t secure the models themselves. The risk comes from how agents connect to data and to one another. For example, attackers breached Salesloft — a third-party vendor whose AI chat agent, Drift, integrates with Salesforce — by stealing credentials from Drift, then used those credentials to access Salesforce customer data. The incident highlights how AI agent ecosystems introduce new layers of third-party dependency, where vendors, racing to evolve their offerings, tend to prioritise functionality over maturing their security operations and practices.
In 2026, these attack paths will multiply as organisations deploy fleets of agents with varying levels of privilege. Some will be designed to write code, others to automate workflows, analyse data, or communicate with external systems. Each will have a unique identity and set of credentials and privileges — creating an exponential growth in access risk.
The real wake-up call will come when an AI agent acts outside its intended purpose or is used without authorisation — such as when a malicious prompt tricks an agent into taking an unauthorised action or revealing the API key it uses. In that moment, organisations will discover that the “kill switch” for a rogue agent isn’t a power cord — it’s the ability to revoke its identity instantly.
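In practice, that kill switch amounts to an authoritative credential registry that every agent request is checked against, so that deleting one entry cuts the agent off immediately. A minimal sketch, assuming a hypothetical in-memory `AgentRegistry` (the class and method names are illustrative, not a real product API):

```python
import secrets


class AgentRegistry:
    """Minimal registry mapping agent identities to their active credentials."""

    def __init__(self):
        self._creds = {}  # agent_id -> currently valid API key

    def enroll(self, agent_id):
        """Issue a fresh credential for a new agent identity."""
        key = secrets.token_hex(16)
        self._creds[agent_id] = key
        return key

    def revoke(self, agent_id):
        """The 'kill switch': invalidate the agent's identity instantly."""
        self._creds.pop(agent_id, None)

    def is_authorized(self, agent_id, key):
        """Gate every agent action on a live lookup, not a cached token."""
        return self._creds.get(agent_id) == key


registry = AgentRegistry()
key = registry.enroll("code-review-agent")
print(registry.is_authorized("code-review-agent", key))  # True while enrolled
registry.revoke("code-review-agent")
print(registry.is_authorized("code-review-agent", key))  # False once revoked
```

The design point is that authorisation is checked against the registry on every call rather than trusting a long-lived token, so revocation takes effect on the next request instead of whenever a cached credential happens to expire.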
Shrinking Certificate Lifespans Will Trigger a Wave of Machine Outages
Starting 15 March 2026, when maximum TLS certificate validity drops from 398 days to 200 days, security teams will face an unrelenting cycle of renewals and machine-identity-based outages. While the intent behind this global policy change by Google, Microsoft and Apple is to enhance security, the unintended operational consequences will be widespread for organisations that still rely on manual tracking and spreadsheets.
A digital certificate is a type of machine identity. Forgotten or unmanaged certificates will inevitably expire, breaking trust between connected machines and taking critical systems – from airport baggage handling and payment terminals to industrial control systems – offline.
Outages caused by expired certificates will grow more frequent over time, affecting most businesses and governments worldwide. This “digital whack-a-mole” will expose the operational fragility of organisations that have not automated certificate management – it’s no longer a question of if, but when.
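The core of automated certificate management is unglamorous: a recurring job that compares every certificate’s expiry date against a renewal window and flags what is due. A minimal sketch, assuming a hypothetical inventory of hostnames and expiry dates (in a real deployment these would come from a network scan or a certificate lifecycle management tool rather than a hard-coded dictionary):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical certificate inventory: hostname -> expiry (notAfter) date.
inventory = {
    "payments.example.com": datetime(2026, 4, 1, tzinfo=timezone.utc),
    "baggage.example.com": datetime(2026, 1, 20, tzinfo=timezone.utc),
}


def renewals_due(certs, now, window_days=30):
    """Return hostnames whose certificates expire within the renewal window."""
    cutoff = now + timedelta(days=window_days)
    return sorted(host for host, expiry in certs.items() if expiry <= cutoff)


# As of 1 January 2026, only the January certificate is inside the 30-day window.
print(renewals_due(inventory, datetime(2026, 1, 1, tzinfo=timezone.utc)))
```

With 200-day certificates, each host cycles through this window roughly twice a year, which is why the check has to feed an automated renewal pipeline rather than a spreadsheet reminder.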
Innovation in Governance Will Be the Only Defence Against Runaway AI
The law of unintended consequences will dominate organisational cybersecurity in 2026. As enterprises increase their reliance on autonomous AI agents with minimal human oversight, and as machine identities multiply, accountability will blur. The constant tension between efficiency and security will fuel uncontrolled privilege sprawl — forcing organisations to innovate not only in technology, but in governance.
Attackers will exploit this shift, embedding malicious prompts and compromising automated pipelines to trigger actions that bypass traditional controls. Conventional privileged access management and identity access management will no longer be sufficient. Continuous monitoring, adaptive risk frameworks and real-time credential revocation will become essential to manage the full lifecycle of AI agents.
At the same time, innovation in governance and regulation will be critical to prevent a future defined by “runaway” automation. Two years after NIST released its first ‘AI Risk Management Framework’, the framework remains voluntary, and adoption has been inconsistent because no jurisdiction mandates it. Unless governance becomes a requirement — not just a guideline — organisations will continue to treat it as a cost rather than a safeguard.
Regulatory frameworks that once focused on data privacy will expand to cover AI identity governance and cyber resilience, mandating cross-region redundancy and responsible agent oversight. Without this, efficiency-driven consolidation across cloud, data and AI providers risks creating single points of failure — where one outage or exploit could ripple through entire economies, similar to the global outage caused by CrowdStrike in July 2024.
In 2026, identity will be the key to organisational survival — the control point that determines resilience against autonomous agents, unintended system outages, and the cascading consequences of automation without oversight.




