Australia’s top cybersecurity leaders are stepping into what some are calling the next “wild west” of technology: agentic AI, where speed is everything and the rules are still being written in real time.
While Europe tightens the bolts with strict AI laws and the United States leans on broader frameworks, Australia is carving out its own path, one that some say leaves too much open to interpretation.
Mandy Andress, CISO at Elastic, describes the country’s approach as sitting “in the middle,” offering high-level guardrails without heavy-handed regulation. But that flexibility comes at a cost. It is hard to know which path is best while it is still early days; more data, good and bad, will shape where this path goes. Each country has its non-negotiables, and some are more willing to take a bet than others.
“There is no playbook,” she warned, noting that whatever strategy companies adopt today could be obsolete within months as the technology grows at lightning speed.
Inside security teams, there is growing tension between moving fast to innovate and slowing down to secure the foundations.
According to Andress, most CISOs are doubling down on core security principles. This means controlling what AI agents can access and ensuring visibility into how they make decisions autonomously.
“Identity is the control plane of AI,” Andress added, warning that the explosion of machine identities is creating a massive new attack surface, one that organisations are already struggling to manage day to day.
Layering on security controls might seem like the obvious fix, but it risks undermining AI’s biggest selling point: speed and efficiency.
Security leaders now face the million-dollar question: how do you lock down AI without slowing it down? The answer is still being written, and only time will tell.
On the plus side, some organisations are already experimenting with AI policing itself, deploying agents to monitor other agents in an attempt to scale oversight without human bottlenecks.
The threat landscape is set to get worse before it gets better. From hyper-targeted phishing campaigns to autonomous hacking tools, attackers are already weaponising AI at speed.
“It’s going to get worse before it gets better,” the cybersecurity executive said.
Still, there’s a long term upside. As defensive tools catch up, Andress believes security teams will eventually operate at machine speed, turning the proverbial tables on adversaries.
One day, Andress predicts, today’s challenges will be viewed as cybersecurity’s “dark ages.” When that day will come, though, remains unclear.
Organisations willing to move faster may gain a competitive edge, but they will also absorb more risk. Those that wait for certainty could be left behind, as has already been demonstrated: large, established companies have been overturned by far smaller operations leveraging AI aggressively.
“You can prove what was created yesterday was safe and secure, but there’s been an evolution a week later that now it’s no longer,” Andress noted, highlighting the near-impossible balancing act leaders now face.
Looking ahead, Andress expects 2026 to be defined by experimentation, with appetite for agents heating up as organisations test them across everything from software development to personal productivity.