AI Broke Cybersecurity and Corporate Leadership Still Hasn’t Noticed
Posted: Monday, Jan 26
Karissa Breen, more commonly known as KB, is crowned a LinkedIn ‘Top Voice in Technology’, and widely recognised across the global cybersecurity industry. A serial entrepreneur, she is the co-founder of the TMFE Group, a portfolio of cybersecurity-focused businesses spanning an industry-leading media platform, a specialist marketing agency, a content production studio, and the executive headhunting firm, MercSec. Now based in the United States, KB oversees US editorial operations and leads the expansion of the group’s media footprint across North America, while maintaining a strong presence in Australia, and the broader global market. She is the former Producer and Host of the streaming show 2Fa.tv, and currently sits at the helm of journalism for the group’s flagship arm, KBI.Media, the independent cybersecurity media company. As a cybersecurity investigative journalist, KB hosts her globally-renowned podcast, KBKast, where she interviews leading cybersecurity practitioners, CISOs, government officials including heads-of-state, and industry pioneers from around the world. The podcast has been downloaded in over 65 countries with more than 400,000 global downloads, influencing billions of dollars in cybersecurity budgets. KB is known for asking the hard questions and extracting real, commercially relevant insights. Her approach provides an uncoloured, strategic lens on the evolving cybersecurity landscape, demystifying complex security issues and translating them into practical intelligence for executives navigating risk, regulation, and rapid technological change.


Code is now being written and shipped faster than most security programs were ever built to withstand. Developers are moving at speed. Product teams are pushing releases earlier. And companies are racing to stay competitive, often without fully grasping the risks they’ve just signed up for.

Sonali Chaudhuri, Founder of MySmartOps, commented:

“…Exposing ourselves to greater risks like data loss or adversary gaining infrastructure access.”

Too many organisations are diving headfirst into AI without first deciding how much risk they’re actually willing, or able, to tolerate. Meanwhile, corporate leadership is still catching on to the reality of what’s really going on here.

For years, security strategy focused on the perimeter. The cloud erased that. Now AI has rewritten the playbook entirely, and many organisations are spiralling, unsure what matters anymore or where to focus.

“Supply chain components or software components, plugins that have been poisoned through the API chain and basically enter your development environment,” Chaudhuri warned.

Developer environments have become the prime target. Poisoned plugins, compromised APIs and malicious dependencies are slipping quietly into build pipelines and straight into production.
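One basic guard against poisoned components is refusing to admit any artifact into the build unless it matches a pinned cryptographic digest. The sketch below is illustrative only; in practice the pins would come from a lockfile or an approved internal registry, not be computed inline.

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact's SHA-256 digest matches the pin."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Demo with an in-memory "artifact" so the example is self-contained.
artifact = b"example plugin payload"
pinned = hashlib.sha256(artifact).hexdigest()  # the digest a lockfile would record

assert verify_artifact(artifact, pinned)             # untampered artifact passes
assert not verify_artifact(artifact + b"x", pinned)  # any modification is rejected
```

Package managers offer the same idea natively (for example, pip's hash-checking mode), which is usually preferable to rolling your own verification.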

Meanwhile, boards remain fixated on MFA rollouts and phishing simulations while attackers move upstream, exploiting software supply chains and developer tooling that remain largely ungoverned. At the same time, a quieter and more aggressive threat is accelerating: machine identities.

“We've seen several breaches in the last few months where non-human identities are being exploited,” Chaudhuri said.

Service accounts, automation tokens and CI/CD credentials now outnumber human users, yet they rarely rotate, steadily accumulate privilege and often persist long after their original purpose disappears. Nearly half of organisations report incidents tied to machine-managed identities, but leadership responses remain slow and fragmented.

“You have to treat these machine identities like secrets,” Chaudhuri stressed.

Used deliberately, AI-assisted development can deliver significant strides. But the starting point matters. Legacy, low-criticality applications, the brittle and undocumented systems, are the right testing ground.

“We start really small,” Chaudhuri explained. “Using AI to add code comments, clarifying flow, and gradually adopt AI generated suggestions.”

Organisations are doing the opposite, deploying AI straight into critical systems where failure carries regulatory, financial, or safety consequences. The intent and ambition make sense, but the execution reflects poor judgement.

“The real question is how resilient the organisation is when things don’t go as per plan,” Chaudhuri said.

Mature organisations assume failure is possible and prepare for it. Immature ones rely on hope and post-incident explanations. Hope is not a strategy.

Security can no longer be bolted on after deployment; that much is obvious. But resilience can’t be retrofitted under pressure either. AI has eliminated the buffer time organisations once relied on to react. Time to make decisions is running out, and so is the patience of security teams trying to manage the demands.

AI will not slow down. Development will not retreat to a safer pace. And security teams will not ‘catch up’ without clear executive intent.

“The key to successful adoption is again balancing innovation and agility with the risk comfort mindset,” Chaudhuri commented.

The organisations that win this next phase won’t be the loudest or the fastest. They’ll be the ones that understand their risk, secure what they can’t see, and build resilience into how software is created, not after it breaks.
