Introduction
Genetec founder, president and CEO Pierre Racz recently delivered a keynote at the company's Montreal headquarters on the future of artificial intelligence, arguing that the industry is barreling toward risk, misinformation and accountability failures unless leaders change course now.
According to Racz, AI doesn’t actually understand anything.
“It’s not intelligent, it’s a mindless mapping,” Racz said.
The CEO framed the entire security industry around a single responsibility: truth.
“People may want AI, but people need the truth.”
He warned that as artificial intelligence systems flood the world with content, truth is not rising to the top; it is getting buried, making reliable information ever harder to discern.
Unlike carefully researched reporting, misinformation is cheap, fast and easy to produce at scale.
“If you flood the world with more and more information, the truth will not float up, it will sink to the bottom.”
Racz pointed to real-world consequences already unfolding, from legal losses to dangerous decision-making errors.
One example involved a chatbot whose incorrect refund advice ultimately cost an airline a tribunal case.
Air Canada was forced to pay damages after its chatbot gave a customer incorrect refund advice. The passenger relied on the bot’s guidance, only to be denied reimbursement later, prompting a legal challenge.
The tribunal ruled against the airline, making it clear that companies are in fact responsible for what their AI systems tell customers. The defence that the chatbot was ‘separate’ didn’t hold.
The case has quickly become a reminder that automation doesn’t remove accountability; it amplifies it.
Two’s Company
Another example involved a healthcare AI system recommending medication that should never have been prescribed.
“These technologies are of marginal reliability… it’s important to engineer them so the impact of failure is not super costly.”
The issue isn’t whether AI fails, it’s how catastrophic those failures become.
Racz called for direct accountability at the top.
“If you provide technology that returns factually incorrect information… the executives should be held liable.”
He compared the situation to corporate scandals that led to strict financial regulations, arguing AI needs similar consequences.
“If you don’t understand it, find someone who can… otherwise you’re too stupid to be an executive.”
Beyond technical flaws, Racz highlighted a deeper concern around how AI systems are shaping human behaviour.
“They’re very good at keeping our attention by getting us mad.”
He pointed to emerging research showing that platforms reward extreme behaviour and low-quality content because they drive engagement. The result is a feedback loop in which outrage outperforms accuracy.
A Balancing Act
Racz also took a shot at what he sees as a dangerous economic imbalance: tech companies chasing upside while trying to offload downside risk.
“If you have the probability of winning, you should have the probability of losing money.”
He criticised calls for government bailouts if AI systems fail, warning that such backstops create a ‘moral hazard’ that encourages reckless innovation.
While the AI boom continues, Racz cautioned that history tells a different story.
From past ‘AI winters’ to overhyped breakthroughs that collapsed under scrutiny, he sees a familiar pattern repeating.
“We’ve been through this cycle before… peak of expectations, then disillusionment.”
Conclusion
For the founder, the future of AI isn’t about chasing hype; it’s about building systems that can be trusted under pressure. Because when these systems fail, the consequences are legal, financial and human.
Racz closed with a line often attributed to Mark Twain: “It ain’t what you don’t know that gets you into trouble… it’s what you know for sure that ain’t so.”