Is AI Under Fire? Genetec CEO Warns of Dangerous Tech Failures
Posted: Friday, May 01
Karissa Breen, more commonly known as KB, is crowned a LinkedIn ‘Top Voice in Technology’, and widely recognised across the global cybersecurity industry. A serial entrepreneur, she is the co-founder of the TMFE Group, a portfolio of cybersecurity-focused businesses spanning an industry-leading media platform, a specialist marketing agency, a content production studio, and the executive headhunting firm, MercSec. Now based in the United States, KB oversees US editorial operations and leads the expansion of the group’s media footprint across North America, while maintaining a strong presence in Australia, and the broader global market. She is the former Producer and Host of the streaming show 2Fa.tv, and currently sits at the helm of journalism for the group’s flagship arm, KBI.Media, the independent cybersecurity media company. As a cybersecurity investigative journalist, KB hosts her globally-renowned podcast, KBKast, where she interviews leading cybersecurity practitioners, CISOs, government officials including heads-of-state, and industry pioneers from around the world. The podcast has been downloaded in over 65 countries with more than 400,000 global downloads, influencing billions of dollars in cybersecurity budgets. KB is known for asking the hard questions and extracting real, commercially relevant insights. Her approach provides an uncoloured, strategic lens on the evolving cybersecurity landscape, demystifying complex security issues and translating them into practical intelligence for executives navigating risk, regulation, and rapid technological change.


Introduction

Genetec Founder, President and CEO Pierre Racz recently delivered a keynote at the company's headquarters in Montreal on the future of artificial intelligence, arguing that the industry is barreling toward risk, misinformation and accountability failures unless leaders change course now.

According to Racz, AI doesn’t actually understand anything.

“It’s not intelligent, it’s a mindless mapping,” Racz said.

The CEO framed the entire security industry around a single responsibility: truth.

“People may want AI, but people need the truth.”

He warned that as artificial intelligence systems flood the world with content, truth is not rising to the top. It is getting buried, and becoming ever harder for people to discern.

Unlike carefully researched reporting, misinformation is cheap, fast and easy to produce at scale.

“If you flood the world with more and more information, the truth will not float up, it will sink to the bottom.”

Racz pointed to real-world consequences already unfolding, from legal losses to dangerous decision-making errors.

One example involved a chatbot giving incorrect refund advice that ultimately cost an airline a tribunal case.

Air Canada was forced to pay damages after its chatbot gave a customer incorrect refund advice. The passenger relied on the bot’s guidance, only to be denied reimbursement later, prompting a legal challenge.

The tribunal ruled against the airline, making it clear that companies are in fact responsible for what their AI systems tell customers. The defence that the chatbot was ‘separate’ didn’t hold.

The case has quickly become a reminder that automation doesn’t remove accountability, it amplifies it.

Two’s Company

Another example involved a healthcare AI system recommending a medication that should never have been prescribed.

“These technologies are of marginal reliability… it’s important to engineer them so the impact of failure is not super costly.”

For Racz, the issue isn’t whether AI fails, it’s how catastrophic those failures are allowed to become.

Racz called for direct accountability at the top.

“If you provide technology that returns factually incorrect information… the executives should be held liable.”

He compared the situation to corporate scandals that led to strict financial regulations, arguing AI needs similar consequences.

“If you don’t understand it, find someone who can… otherwise you’re too stupid to be an executive.”

Beyond technical flaws, Racz highlighted a deeper concern around how AI systems are shaping human behaviour.

“They’re very good at keeping our attention by getting us mad.”

He pointed to emerging research showing that platforms reward extreme behaviour and low-quality content because it drives engagement, creating a feedback loop in which outrage outperforms accuracy.

A Balancing Act

Racz also took a shot at what he sees as a dangerous economic imbalance, with tech companies chasing the upside while trying to offload the downside risk.

“If you have the probability of winning, you should have the probability of losing money.”

He criticised calls for government bailouts if AI systems fail, warning it creates a ‘moral hazard’ that encourages reckless innovation.

While the AI boom continues, Racz cautioned that history tells a different story.

From past ‘AI winters’ to overhyped breakthroughs that collapsed under scrutiny, he sees a familiar pattern repeating.

“We’ve been through this cycle before… peak of expectations, then disillusionment.”

Conclusion

For the Founder, the future of AI isn’t about chasing hype; it’s about building systems that can be trusted under pressure. Because when these systems fail, the consequences are legal, financial and human.

“It ain’t what you don’t know that gets you into trouble… it’s what you know for sure that ain’t so.”
