Plus, 70% say adversaries are using AI as successfully – or better – than digital trust pros
Posted: Monday, Nov 13
  • KBI.Media



Sydney, Australia (13 November 2023)— A new poll of global digital trust professionals reveals high employee usage of generative Artificial Intelligence (AI) in Australia and New Zealand (63%), few company policies around its use (only 11% have a formal policy), lack of training (80% have no or limited training), and fears around its exploitation by bad actors (97% report being at least somewhat worried), according to Generative AI 2023: An ISACA Pulse Poll.


Employees in ANZ are commonly using AI to create written content (51%), increase productivity (37%), automate repetitive tasks (37%), improve decision making (29%) and provide customer service (20%).


Jo Stewart-Rattray, Oceania Ambassador, ISACA, said Australia is in the global spotlight after announcing a strategy to implement six cyber shields to protect the nation’s digital security, and, more recently, Microsoft’s announcement of a significant investment in Australia’s digital future.


“It is an exciting time to be a digital trust professional in Australia, as the opportunity to maximise the vast possibilities presented by AI is significant,” said Ms Stewart-Rattray. “But there is an urgent need to address the inevitable risks AI will generate, without stunting innovation and the benefits this technology brings.


“As employees across the nation increasingly explore AI in the workplace – some initially out of curiosity – organisations must prioritise policies and governance frameworks addressing ethical, privacy and security concerns, to name a few. According to ISACA’s research, workplace usage in this region, at 63 percent, is considerably higher than in other parts of the world, at 40 percent.


“What we need to do is put guardrails around the use of AI to ensure the security of corporate data and to ensure there are formal governance guidelines in place,” said Stewart-Rattray.


Diving in, even without policies

The poll found that many employees at respondents’ organisations are using generative AI, even without policies in place for its use. Only 36 percent of ANZ organisations say their companies expressly permit the use of generative AI (compared to 28 percent globally), only 11 percent say a formal comprehensive policy is in place, and 21 percent say no policy exists and there is no plan for one. Despite this, 63 percent say employees are using it regardless—and the percentage is likely much higher given that an additional 26 percent aren’t sure.


Lack of familiarity and training 

However, despite employees quickly moving forward with use of the technology, only four percent of respondents’ organisations in ANZ are providing AI training to all staff, and more than half (57 percent) say that no AI training at all is provided, even to teams directly impacted by AI. Only 32 percent of respondents indicated they have a high degree of familiarity with generative AI.


“Employees are not waiting for permission to explore and leverage generative AI to bring value to their work, and it is clear that their organisations need to catch up in providing policies, guidance and training to ensure the technology is used appropriately and ethically,” said Jason Lau, ISACA board director and CISO. “With greater alignment between employers and their staff around generative AI, organisations will be able to drive increased understanding of the technology among their teams, gain further benefit from AI, and better protect themselves from related risk.”


Risk and exploitation concerns

The poll explored the ethical concerns and risks associated with AI as well, with 38 percent of ANZ respondents saying that not enough attention is being paid to ethical standards for AI implementation. Just over one-third of their organisations consider managing AI risk to be an immediate priority, 32 percent say it is a longer-term priority, and 17 percent say their organisation does not have plans to consider AI risk at the moment, even though respondents note the following as top risks of the technology:


  1. Misinformation/disinformation (90 percent vs 77 percent globally)
  2. Loss of intellectual property (IP) (68 percent vs 58 percent globally)
  3. Social engineering (65 percent vs 63 percent globally)
  4. Privacy violations (64 percent vs 68 percent globally)
  5. Job displacement and widening of the skills gap (tied at 35 percent)

More than half (54 percent) of respondents in Australia and New Zealand indicated they are very or extremely worried about generative AI being exploited by bad actors. Seventy percent say that adversaries are using AI as successfully or more successfully than digital trust professionals. 


“Even digital trust professionals report a low familiarity with AI—a concern as the technology iterates at a pace faster than anything we’ve seen before, with use spreading rampantly in organisations,” said John De Santis, ISACA board chair. “Without good governance, employees can easily share critical intellectual property on these tools without the correct controls in place. It is essential for leaders to get up to speed quickly on the technology’s benefits and risks, and to equip their team members with that knowledge as well.”


Impact on jobs

Examining how current roles are involved with AI, respondents believe that security (47 percent), IT operations (42 percent), risk (38 percent), compliance (32 percent) and the executive team (37 percent) are responsible for the safe deployment of AI. Looking ahead, only 12 percent of organisations are opening job roles for AI-related functions in the next 12 months. Forty percent believe a significant number of jobs will be eliminated due to AI, but digital trust professionals remain optimistic about their own jobs, with 71 percent saying AI will have some positive impact on their roles. To realise that positive impact, 75 percent think they will need additional training to retain their job or advance their career.


Optimism in the face of challenges

Despite the uncertainty and risk surrounding AI, 76 percent of respondents believe AI will have a positive or neutral impact on their industry, 79 percent believe it will have a positive or neutral impact on their organisations, and 85 percent believe it will have a positive or neutral impact on their careers. Eighty-six percent of respondents also say AI is a tool that extends human productivity, and 66 percent believe it will have a positive or neutral impact on society as a whole. 


Learn More 

Read more in the infographic outlining these findings, along with other AI resources, including the AI Fundamentals Certificate, the complimentary The Promise and Peril of the AI Revolution: Managing Risk white paper, and a free guide to AI policy considerations, at


Digital trust professionals from around the globe—working in cybersecurity, IT audit, governance, privacy and risk—weighed in on generative AI (artificial intelligence that can generate text, images and other media) in the new pulse poll from ISACA, which explores employee use, training, attention to ethical implementation, risk management, exploitation by adversaries, and impact on jobs.



ISACA® is a global community advancing individuals and organizations in their pursuit of digital trust. For more than 50 years, ISACA has equipped individuals and enterprises with the knowledge, credentials, education, training and community to progress their careers, transform their organizations, and build a more trusted and ethical digital world. ISACA is a global professional association and learning organisation that leverages the expertise of its more than 165,000 members who work in digital trust fields such as information security, governance, assurance, risk, privacy and quality. It has a presence in 188 countries, including 225 chapters worldwide. Through its foundation One In Tech, ISACA supports IT education and career pathways for under-resourced and underrepresented populations.



Karen Keech, 
