IT security leaders across ANZ expect AI agents to be beneficial, yet most see significant readiness gaps in deploying proper safeguards
As AI adoption accelerates and cyber threats increase, 75 per cent of Australian and New Zealand (ANZ) IT security leaders recognise their security practices need transformation. New data from the latest Salesforce State of IT survey also reveals unanimous optimism about AI agents, with 100 per cent of security leaders – both in ANZ and globally – identifying at least one security concern that could be improved by agents.
Despite this hope, the survey of over 2,000 enterprise IT security leaders – including 100 in ANZ – highlights significant implementation challenges ahead. In ANZ, 58 per cent worry their data foundation isn’t set up to get the most out of agentic AI, while another 58 per cent aren’t fully confident they have appropriate guardrails to deploy AI agents.
But this hasn’t stopped ANZ organisations from using agents for IT security; 36 per cent of local teams already integrate agents in their day-to-day operations, a figure that’s anticipated to nearly double to 68 per cent over the next two years. IT security leaders expect a range of benefits as their use of agents ramps up, from threat detection to sophisticated auditing of AI model performance.
Salesforce perspective:
“Trusted AI agents are built on trusted data. IT security teams that prioritise data governance will be able to augment their security capabilities with agents while protecting data and staying compliant.” – Alice Steinglass, EVP & GM, Salesforce Platform, Integration, and Automation
Why it matters: Both the professionals charged with protecting a company’s data and systems and the bad actors looking to exploit vulnerabilities are increasingly adding AI to their toolkits. Autonomous AI agents, which help security teams cut down on manual work, can free up humans’ time for more complex problem solving. However, agentic AI deployments require robust data infrastructure and governance to be successful.
Security budgets ramp up as threats evolve
In addition to a familiar slate of risks like malware and phishing attacks, IT leaders now cite data poisoning, in which malicious actors compromise AI training data sets, among their top three concerns, alongside cloud security threats and insider/internal threats.
Resources are rising in response: 71 per cent of ANZ organisations expect to boost security budgets in the coming year, slightly below the global average of 75 per cent.
Complex regulatory environments add a wrinkle to AI implementation
While 74 per cent of ANZ IT security leaders believe AI agents offer compliance opportunities, such as by improving adherence to privacy laws, 83 per cent say they also present compliance challenges. This may stem in part from an increasingly complex and evolving regulatory environment across industries, and is hampered by compliance processes that remain largely unautomated and prone to error.
This complexity extends to the implementation of AI and automation. Only 48 per cent of ANZ IT security leaders are fully confident they can deploy AI agents in compliance with regulations and standards, while 85 per cent of organisations say they haven’t fully automated their compliance processes.
In addition to enhancing their data foundations for the AI era, more than half of the teams say they need to improve their overall security and compliance practices. In Australia and New Zealand, however, 61 per cent think their security and compliance practices are ready for AI agent development and use.
Data governance is a linchpin in enterprises’ agentic evolution
More than half of IT security leaders in ANZ doubt they have the quality data needed for AI or the right setup for deployment. However, progress is being made. A recent global survey of CIOs found that budgets for data infrastructure and management are four times higher than those for AI, indicating that organisations are laying the necessary groundwork for broader implementation.
Trust is a cornerstone of successful AI, yet confidence is nascent
A recent consumer study found that trust in companies is on the decline, with three-quarters of Australian consumers saying they trust companies less than they did a year ago. Additionally, 69 per cent believe that advances in AI make trust even more important.
IT security leaders across ANZ highlighted key areas where efforts are needed to earn trust:
- 59 per cent haven’t perfected their ethical guidelines for AI use.
- 68 per cent aren’t fully confident in the accuracy or explainability of their AI outputs.
- 58 per cent don’t provide full transparency into how customer data is used in AI.
The customer view:
Arizona State University (ASU) is among the first universities to leverage Agentforce, Salesforce’s digital labour platform for augmenting teams with trusted autonomous AI agents in the flow of work. ASU stresses the need for data relevancy, especially as the university advances its AI initiatives. The university implemented Own, Salesforce’s acquired backup, recovery, and archiving solutions, giving it a comprehensive approach to data management that addresses its needs for backup, recovery, compliance, and innovation support.
Methodology:
Data is sourced from a security, privacy, and compliance leader segment of a double-anonymous survey of IT decision-makers conducted from December 24, 2024 through February 3, 2025. Respondents represented Australia, Belgium, Brazil, Canada, Denmark, Finland, France, Germany, India, Indonesia, Ireland, Israel, Italy, Japan, Mexico, the Netherlands, New Zealand, Norway, Portugal, Singapore, South Korea, Spain, Sweden, Switzerland, Thailand, the United Arab Emirates, the United Kingdom, and the United States. 100 respondents were from Australia and New Zealand.