Rushing AI Adoption? Security Leaders Say to Slow Down
Posted: Wednesday, Feb 18
  • KBI.Media
  • $
  • Rushing AI Adoption? Security Leaders Say to Slow Down
Karissa Breen, crowned a LinkedIn ‘Top Voice in Technology’, is more commonly known as KB, and widely known across the cybersecurity industry. A serial Entrepreneur and co-founder of the TMFE Group, a holding company and consortium of several businesses all relating to cybersecurity. These include an industry-leading media platform, a marketing agency, a content production studio, and the executive headhunting firm, MercSec. She is also the former Producer and Host of the streaming show, 2Fa.tv. Our flagship arm, KBI.Media, is an independent and agnostic global cyber security media company led by KB at the helm of the journalism division. As a Cybersecurity Investigative Journalist, KB hosts her renowned podcast, KBKast, interviewing cybersecurity practitioners around the globe on security and the problems business executives face. It has been downloaded in 65 countries with more than 300K downloads globally, influencing billions of dollars in cyber budgets. KB is known for asking the hard questions and getting real answers from her guests, providing a unique, uncoloured position on the always evolving landscape of cybersecurity. She sits down with the top experts to demystify the world of cybersecurity, and provide genuine insight to executives on the downstream impacts cybersecurity advancement and events have on our wider world.

i 3 Table of Contents

Rushing AI Adoption? Security Leaders Say to Slow Down

Artificial intelligence isn’t just transforming business. It’s reshaping risk.

At the recent Genetec Press Summit in Montréal, Manager and Principal Security Architect, Mathieu Chevalier delivered a presentation that the promise of AI is real, but so are the vulnerabilities.

“It feels like everyone is talking about [AI], and with good reason… this technology shows a lot of promise. The thing is that with high promise comes high risk as well.”

Enterprises are racing to operationalise AI. Some faster than others. Moving fast is good in this economy and the breaking of things can be even faster.

Chevalier pointed to an incident involving Coinbase, where leadership aggressively pushed AI adoption internally. The result? Internal friction and fallout.

“The CEO is a big AI supporter, so he asked its staff to use AI right now… At the end of the week, they were let go because they were not aligned with what the CEO wanted. This is pretty intense.”

At Genetec, the approach is more measured.

“We know that good things take time and that you have to understand something in order to use it properly.”

AI strategy without governance is a liability, plus a slippery slope.

While headlines can focus on AI productivity gains, attackers are studying something else, how to manipulate large language models. This is where prompt injection enters the room.

“Prompt injection vulnerability occurs when an attacker manipulates an LLM causing it to unknowingly execute the attacker's intention.”

Chevalier demonstrated via Gandalf by Lakera on how relatively simple prompts can override safeguards, extract sensitive data, or redirect system behaviour.

“In my opinion, it’s simpler than hacking traditional IT systems… Feels more like you’re fooling a child at a mind game.”

AI doesn’t always need to be ‘breached’ in the traditional sense, it can be socially engineered at scale – even Gandalf, as what was discovered, can be fooled.

Beyond enterprise risk, Chevalier warned about broader threats, particularly deepfakes.

“Deepfakes are an important danger, an important risk that as a society we face today.”

Even protective mechanisms like watermarking can be bypassed using other AI tools. So basically, you use an AI tool to create an image with a watermark to then use another AI tool to remove that watermark. Sometimes it’s easy to spot, and other times it can be way harder to discern, even with a watermark.

He also referenced AI misalignment scenarios, which included documented cases where systems under stress exhibited manipulative behaviour.

“AI system resorts to blackmail if told it will be removed… this shows you the risk of AI misalignment.”

For all the futuristic concerns, Chevalier brought the conversation back to fundamentals.

“You want many layers of defence… If one layer fails, then you have a second layer and maybe you have a third layer behind it.”

AI systems, he reiterated, are still software at the end of the day.

“AI system is software, right? So it’s not something magical… If it’s software, it means that classic application security techniques would still work here.”

In other words, don’t abandon proven security principles in the rush to adopt emerging technology, despite competitors doing so. Sometimes it can be at the detriment of good security hygiene.

Share This