AI Broke Cybersecurity and Corporate Leadership Still Hasn’t Noticed
Posted: Monday, Jan 26
Karissa Breen, crowned a LinkedIn ‘Top Voice in Technology’, is more commonly known as KB and is widely known across the cybersecurity industry. She is a serial entrepreneur and co-founder of the TMFE Group, a holding company and consortium of several cybersecurity businesses, including an industry-leading media platform, a marketing agency, a content production studio, and the executive headhunting firm MercSec. She is also the former producer and host of the streaming show 2Fa.tv. The group's flagship arm, KBI.Media, is an independent and agnostic global cybersecurity media company, with KB at the helm of its journalism division. As a cybersecurity investigative journalist, KB hosts her renowned podcast, KBKast, interviewing cybersecurity practitioners around the globe on security and the problems business executives face. It has been downloaded in 65 countries, with more than 300K downloads globally, influencing billions of dollars in cyber budgets. KB is known for asking the hard questions and getting real answers from her guests, providing a unique, uncoloured perspective on the always-evolving landscape of cybersecurity. She sits down with top experts to demystify the world of cybersecurity and provide genuine insight to executives on the downstream impacts that cybersecurity advancements and events have on our wider world.



Code is now being written and shipped faster than most security programs were ever built to withstand. Developers are moving at speed. Product teams are pushing releases earlier. And companies are racing to stay competitive, often without fully grasping the risks they’ve just signed up for.

Sonali Chaudhuri, Founder of MySmartOps commented,

“…Exposing ourselves to greater risks like data loss or adversary gaining infrastructure access.”

Too many organisations are diving headfirst into AI without first deciding how much risk they’re actually willing or able to tolerate. Interestingly, corporate leadership is still catching on to the reality of what’s really going on here.

For years, security strategy focused on the perimeter. Cloud erased that. Now AI has rewritten the playbook entirely and many organisations are spiralling, unsure what matters anymore or where to focus.

“Supply chain components or software components, plugins that have been poisoned through the API chain and basically enter your development environment,” Chaudhuri warned.

Developer environments have become the prime target: poisoned plugins, compromised APIs and malicious dependencies are slipping quietly into build pipelines and straight into production.
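One common defence against this class of attack is hash pinning: recording a cryptographic digest of every dependency at review time, then refusing to install anything whose digest drifts. The sketch below is a minimal illustration of that idea; the artifact name and hashes are hypothetical, and a real pipeline would take its pins from a lockfile (for example, `requirements.txt` entries used with pip's `--require-hashes` mode, or `package-lock.json`) rather than a hard-coded dictionary.

```python
import hashlib

# Hypothetical pinned digests, as a lockfile would record them at review time.
PINNED = {
    "example-plugin-1.2.0.tar.gz": "sha256:" + hashlib.sha256(b"trusted build").hexdigest(),
}

def verify_artifact(name: str, content: bytes) -> bool:
    """Refuse any artifact whose digest does not match its pin."""
    expected = PINNED.get(name)
    if expected is None:
        return False  # unknown dependency: fail closed
    actual = "sha256:" + hashlib.sha256(content).hexdigest()
    return actual == expected

# A tampered artifact (e.g. a poisoned plugin) no longer matches its pin.
print(verify_artifact("example-plugin-1.2.0.tar.gz", b"trusted build"))   # True
print(verify_artifact("example-plugin-1.2.0.tar.gz", b"poisoned build"))  # False
```

The design choice worth noting is the fail-closed default: a dependency that was never reviewed is rejected outright, rather than waved through because nobody pinned it.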

Meanwhile, boards remain fixated on MFA rollouts and phishing simulations, while attackers move upstream, exploiting software supply chains and developer tooling that remain largely ungoverned. At the same time, a quieter and more aggressive threat is accelerating: machine identities.

“We've seen several breaches in the last few months where non-human identities are being exploited,” Chaudhuri said.

Service accounts, automation tokens and CI/CD credentials now outnumber human users, yet they rarely rotate, steadily accumulate privilege and often persist long after their original purpose disappears. Nearly half of organisations report incidents tied to machine-managed identities, but leadership responses remain slow and fragmented.

“You have to treat these machine identities like secrets,” Chaudhuri stressed.
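Treating machine identities like secrets means, at minimum, knowing their age and enforcing a rotation window. The sketch below illustrates that check under stated assumptions: the inventory records, the credential names and the 90-day policy are all hypothetical, and a real implementation would pull this data from a secrets manager or cloud IAM API rather than hard-coded records.

```python
from datetime import datetime, timedelta, timezone

# Illustrative rotation policy; pick a window that matches your own risk appetite.
MAX_AGE = timedelta(days=90)

# Hypothetical inventory; real data would come from a secrets manager or IAM API.
machine_identities = [
    {"name": "ci-deploy-token", "created": datetime.now(timezone.utc) - timedelta(days=200)},
    {"name": "svc-billing",     "created": datetime.now(timezone.utc) - timedelta(days=12)},
]

def stale_identities(identities, max_age=MAX_AGE):
    """Return the names of credentials that have outlived the rotation window."""
    now = datetime.now(timezone.utc)
    return [i["name"] for i in identities if now - i["created"] > max_age]

print(stale_identities(machine_identities))  # ['ci-deploy-token']
```

A report like this is only the starting point; the harder organisational work is wiring it to automated rotation so a stale token is replaced, not just listed.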

Used deliberately, AI-assisted development can deliver significant gains. But the starting point matters. Low-criticality legacy applications and brittle, undocumented systems are the right testing ground.

“We start really small,” Chaudhuri explained. “Using AI to add code comments, clarifying flow, and gradually adopt AI generated suggestions.”

Organisations are doing the opposite, deploying AI straight into critical systems where failure carries regulatory, financial or safety consequences. The intent and ambition make sense, but the execution is, frankly, poor judgement.

“The real question is how resilient the organisation is when things don’t go as per plan,” Chaudhuri said.

Mature organisations assume failure is possible and prepare for it. Immature ones rely on hope and post-incident explanations. Hope is not a strategy.

Security can no longer be bolted on after deployment; that much is obvious. But resilience can’t be retrofitted under pressure either. AI has eliminated the buffer time organisations once relied on to react. Time to make decisions is running out, and so is the patience of the security teams trying to manage the demands.

AI will not slow down. Development will not retreat to a safer pace. And security teams will not ‘catch up’ without clear executive intent.

“The key to successful adoption is again balancing innovation and agility with the risk comfort mindset,” Chaudhuri commented.

The organisations that win this next phase won’t be the loudest or the fastest. They’ll be the ones that understand their risk, secure what they can’t see, and build resilience into how software is created, not after it breaks.
