SYDNEY, Australia – 26 March 2026 – Global AI security leader TrendAI™ has published new research revealing that organisations worldwide are pushing ahead with AI deployment despite known security and compliance risks.
The new global study* of 3,700 business and IT decision makers from 23 countries, including Australia, found that 67% have felt pressured to approve AI despite security concerns, with almost one in five Australian respondents (19%) describing those concerns as “extreme” yet overridden to keep pace with competitors and internal demand.
Rachel Jin, Chief Platform & Business Officer, Head of TrendAI, said: “Organisations are not lacking awareness of risk; they’re lacking the conditions to manage it. When deployment is driven by competitive pressure rather than governance maturity, you create a situation where AI is embedded into critical systems without the controls needed to manage it safely. This research reinforces our focus on helping organisations drive solid business outcomes with AI while still managing business risk.”
The study also found that the risk of pressure-driven AI rollout is exacerbated by governance inconsistencies and increasingly widespread ambiguity over who is responsible for AI risk. Security teams are likewise left working reactively in response to top-down AI rollout decisions, which often leads to workarounds and increased use of unsanctioned or “shadow” AI tools.
Recent TrendAI™ threat research reinforces this shift, showing how attackers are already using AI to automate reconnaissance, accelerate phishing campaigns and lower the barrier to entry for cybercrime, increasing both the speed and scale of attacks.
AI adoption is outpacing control in Australia
Australian organisations represented in the study are deploying AI faster than they can manage the associated risks, creating a widening gap between ambition and oversight: 68% say AI is advancing more quickly than they can secure it, while 44% of senior business decision makers report only a moderate understanding of the legal frameworks governing AI.
Almost two-thirds of Australian organisations (64%) have comprehensive AI policies in place; however, more than 40% report that unclear regulation or compliance standards and a lack of internal policy and governance remain key barriers to safe AI adoption. In practice, governance maturity is low, with AI often operationalised before the rules governing its use are fully established.
Srujan Talakokkula, Managing Director ANZ of TrendAI, said: “While many organisations across Australia and New Zealand report strong confidence in AI preparedness and strong recognition of AI’s role in combating AI-driven threats, there is a clear gap in understanding of the legal frameworks governing AI, and differing views on accountability and human oversight across both business and IT leadership.
“With governance challenges intensifying and AI-driven threats becoming more sophisticated, visibility of assets and risk management across the entire AI lifecycle is critical. This research highlights the importance of working with trusted partners that allow organisations to safely deploy and scale AI.”
Trust in autonomous AI remains uncertain
Globally, confidence in more advanced, autonomous systems is still maturing. Fewer than half of respondents (44%) believe agentic AI will significantly improve cyber defence in the short term, with ongoing concerns around data access, misuse and lack of oversight.
Australian data shows where those concerns are landing. Almost half of respondents (45%) say AI agents accessing sensitive data is their biggest risk. Over a third (34%) see risks from autonomous code deployment, almost one in three (31%) fear abuse of trusted AI status, and 30% cite hallucinations or false outputs.
At the same time, nearly a third of business decision makers globally (31%) admit they lack observability or auditability over these systems, raising serious questions about how organisations can control or intervene once agents are deployed.
More than half of Australian respondents (54%) support the introduction of AI “kill switch” mechanisms to shut down systems in the event of failure or misuse, while many others remain unsure. Additionally, fewer than half of business decision makers in Australia (42%) believe a human should always remain in the loop in AI-driven security operations. This lack of consensus highlights a deeper issue: organisations are moving towards autonomous AI without agreement on how to retain control when it matters most.
“Agentic AI is moving organisations into a new risk category,” added Rachel Jin. “Our research shows the concerns are already clear, from sensitive data exposure to loss of oversight. Without visibility and control, organisations are deploying systems they don’t fully understand or govern, and that risk is only going to increase unless action is taken.”
To read the full report, please visit: TrendAI™ Global AI Study