Artificial Intelligence (AI) continues to redefine technology’s place in organisations, promising unmatched opportunities alongside uncertainty and risk. At Cisco Live! 2025, the launch of a new report, “Turning Hesitation into Action: How Risk Leaders Can Unlock AI’s Potential”, set out to pierce the fog surrounding AI adoption and governance in Australia. The session, jointly presented by Cisco and the Governance Institute of Australia, offered valuable insights for executives charged with balancing digital transformation against the essential need for robust risk management.
Reframing the AI Conversation: Why Risk Leadership Matters
Kicking off the session, Cori Moran, Director of Communications for Cisco in Australia and New Zealand, framed the discussion’s distinctive angle: this wasn’t to be the usual debate among CIOs, CTOs, or CSOs, but a purposeful shift towards recognising the role of Chief Risk Officers (CROs) and their increasingly important mandate. The event itself marked the official launch of Cisco’s collaborative report with the Governance Institute, an effort specifically aimed at equipping risk leaders to steer AI strategy from hesitation to pragmatic, actionable adoption.
The report was made available, promising a deeper dive for delegates and remote viewers. Report author and lead researcher Brad Howarth took the reins, initiating a thoughtful exploration of the persistent gulf between technologists and organisational leaders, particularly at the board level. This divide, Howarth noted, has endured over two decades of digital disruption, a gap that AI, with its transformative power, can either widen or help bridge.
Bridging Australia’s AI Governance Gap
The findings were clear: Australia lags behind peer economies in AI readiness, especially in governance. Carl Solder, Cisco’s CTO for Australia and New Zealand, described the underlying research, recounting regional surveys from 2023 and 2024 which highlighted governance as a pivotal stumbling block. This suggested that the urgency wasn’t just about deploying technology, but about developing organisational structures, mindsets, and policies that support responsible AI integration. Partnering with the Governance Institute was a strategic move, drawing on members’ expertise, risk governance professionals, company secretaries, and directors, to uncover root causes and practical remedies.
Daniel Popovski, representing the Governance Institute, expanded this context with alarming statistics: over 64% of Australian organisations reported having zero AI training, and 93% admitted they couldn’t quantify ROI on their AI investments. This wasn’t just a technological gap; it was a structural challenge threatening to leave small businesses (98% of Australian enterprises) trailing far behind large corporates. Notably, uneven adoption and shadow AI use emerged as critical risks, with smaller businesses and not-for-profits especially vulnerable.
Policy Uncertainty and the Ripple Effect
Policy indecision compounds these risks. Popovski highlighted the fluctuating stance of Australia’s regulators, swinging from voluntary “guardrails” to more ambiguous policy directions. That uncertainty breeds caution among company boards and risk professionals, and holds back critical investment. The session’s panel weighed international models, comparing Europe’s risk-based approaches against the U.S. “innovate first” ethos. For Australia, decisive, risk-proportionate policy is essential to unlock both business confidence and board-level engagement with AI’s opportunities and dangers.
These governance and economic contexts converge with ethical imperatives. The Governance Institute’s annual Ethics Index has positioned AI as the second most challenging future development facing Australian organisations. The risks extend well beyond the financial, touching on organisational reputation, social responsibility, and even existential threats to business continuity.
Risk Managers: From Gatekeepers to Enablers
Yet hesitancy comes with its own costs. Panelists emphasised a “risk and reward” paradigm, underscoring that the failure to act is itself a profound risk. As Popovski put it, “The right to be in business tomorrow is at stake…” if risk managers and boards do not proactively embed AI into their strategic frameworks.
The evolving role of the risk manager is thus crucial. It’s no longer sufficient to be the “Department of No,” said David Siroky, Cisco’s AI lead, especially in a world where AI products like ChatGPT reached a billion users within a year. Instead, risk leaders must negotiate a “path to yes,” formulating controls and safeguards that permit intelligent, safe AI adoption rather than blanket resistance.
Challenges and Opportunities: Direct from the Roundtable
In a series of roundtables, Cisco and the Governance Institute convened risk professionals across banking, insurance, higher education, retail, and the not-for-profit sector. The pattern was striking: the traditional “wait and see” approach won’t work with AI. The pace of change is simply too fast, and sitting still risks organisational obsolescence.
Participants identified three headline risk categories for “doing AI”: ethical and reputational (erroneous outputs, hallucinations), operational and strategic (uncertain ROI), and security/privacy (data leakage, new attack surfaces). Yet, as was explained, the risk of not doing is even greater. Those who delay face competitive disadvantage as rivals harness AI to streamline operations, enhance customer experience, and ultimately reshape industry standards.
Six Recommendations for Safe, Effective AI Adoption
The heart of the panel lay in its practical recommendations, spearheaded by Siroky:
- Build General AI Knowledge: Not just technical, but legal, regulatory, and industry-relevant understanding is essential for all risk officers.
- Create Interdisciplinary Teams: AI’s impact is pervasive; HR, legal, IT, and business operations must contribute together.
- Position AI as a Business Enabler: Move from discrete AI strategies to embedding AI within the overall business strategy.
- Implement Appropriate Controls: Ensure oversight, auditability, and reproducibility; manage both technology and process risks.
- Raise Awareness Organisation-wide: Democratise AI knowledge through sandboxes, hackathons, and cross-functional training. Invite bottom-up innovation.
- Measure Holistically: Capture ROI, but also learn from failed projects; share lessons to inform future implementation.
These recommendations aren’t just abstract best practice. They’re the direct result of what Australian risk professionals say they are already doing, or urgently need to start.
Barriers, Education, and the Road Ahead
Despite clear guidance, obstacles remain. A question from the floor challenged why organisations seem stuck with “101-level” governance for each new technology: cloud, quantum, and now AI. Panelists acknowledged the problem: education and toolsets simply haven’t kept pace with AI’s breakneck speed of adoption. Even seasoned risk professionals are still learning what’s possible, and what’s dangerous.
Popovski further differentiated AI from other technologies by pointing to its autonomous decision-making capability, which amplifies legal and ethical uncertainty. The call is to move towards holistic, business-wide strategies where risk, legal, and technical roles mesh seamlessly for effective governance.
Supporting Small Businesses and Democratising AI Knowledge
The session closed with a pragmatic question from the audience: “How can the 98% of Australian businesses that are small, often lacking risk specialists or CTOs, access quality advice?” Popovski pointed to new resources produced with the National AI Centre, including practical guides for both small and large enterprises. Government partnership and industry collaboration, he argued, are key to equitable AI adoption.
Risk Managers as Drivers of Safe Acceleration
So, what is the ultimate role of the risk officer? The panel’s clear answer: to ensure their organisation can safely accelerate AI adoption. Safety remains the north star, encompassing the ethical, operational, financial, and technical domains. The imperative is not simply to hedge against the new, but to steward transformative change, bridging the gap between risk and reward, between governance and innovation, and between today’s hesitancy and tomorrow’s action.
This panel report from Cisco Live! 2025 is more than a snapshot of current risk discourse; it’s a roadmap for organisations seeking to harness AI’s promise with confidence, clarity, and care. And it places risk leaders exactly where they belong: at the centre of the future, not merely managing danger, but unlocking potential.
Panel Discussion Members:
Brad Howarth, Report Author & Panel Moderator
Carl Solder, CTO, Cisco ANZ
David Siroky, Head of AI, Cisco ANZ
Daniel Popovski, AI, Cyber and Tech Policy and Advocacy Lead, Governance Institute of Australia