Despite soaring use of artificial intelligence (AI) in the workplace, most organisations remain critically unprepared to manage its risks, according to ISACA’s annual AI Pulse Poll, which surveyed 3,029 digital trust professionals worldwide.
The 2025 poll reveals that 81 percent of respondents believe employees within their organisation use AI, whether it is permitted or not, yet only 28 percent of organisations have a formal AI policy.
Concerningly, just 22 percent of organisations provide AI training to all staff, while 89 percent of tech professionals say they will need AI training within the next two years to advance their careers or even keep their current roles.
The disconnect between widespread AI adoption and lagging oversight is creating growing risk, particularly in the face of escalating threats like deepfakes. In fact, 66 percent of professionals expect deepfake cyberattacks to become more sophisticated within the next 12 months, yet just 21 percent of organisations are currently investing in tools to detect or mitigate them.
Jamie Norton, Board Director, ISACA, said that as employees embrace AI tools to boost efficiency, the absence of formal policies and AI-specific cybersecurity measures leaves organisations increasingly vulnerable to manipulation, reputational harm and data breaches.
“AI is already embedded in daily workflows, but ISACA’s poll confirms governance, policy and risk oversight are significantly lacking,” said Mr Norton. “A security workforce skilled in AI is absolutely critical to tackling the wide range of risks AI brings, from misinformation and deepfakes to data misuse.
“AI isn’t just a technical tool, it’s changing how decisions are made, how data is used and how people interact with information. Leaders must act now to establish the frameworks, safeguards and training needed to support responsible AI use.”
AI Use Booming, but Policies and Training Lacking
Sixty-eight percent of respondents say that the use of AI has resulted in time savings for them and their organisation, and more than half (56 percent) believe that AI will have a positive impact on their career in the next year. The technology is being used in a range of ways, including:
- Creating written content (52 percent)
- Increasing productivity (51 percent)
- Automating repetitive tasks (40 percent)
- Analysing large amounts of data (38 percent)
- Providing customer service (33 percent)
While organisations have made strides in AI policies and training, both still have a way to go. Only 28 percent of organisations have a formal, comprehensive policy in place for AI (up from 15 percent last year). And though 59 percent of organisations say they permit the use of generative AI (up from 42 percent last year), 32 percent of respondents say no AI training is provided to any employees, 35 percent provide training only to those in IT-related positions, and only 22 percent train all employees.
Also, while many are using AI, not all of them fully understand it: 56 percent say they are only somewhat familiar with AI, 28 percent consider themselves very familiar, and just 6 percent say they are extremely familiar.
AI Risks Acknowledged, but Action Lacking
Sixty-one percent are very or extremely worried that generative AI will be exploited by bad actors, and 59 percent believe that AI-powered phishing and social engineering attacks are now more difficult to detect.
Additionally, only 41 percent believe organisations are adequately addressing ethical concerns in AI deployment, such as data privacy, bias and accountability. And only 30 percent have a high degree of confidence in their ability to detect AI-related misinformation.
Only 42 percent of respondents say AI risks are an immediate priority for their organisation. The top risks they cited include:
- Misinformation/disinformation (80 percent)
- Privacy violations (69 percent)
- Social engineering (63 percent)
- Loss of IP (53 percent)
- Job displacement (40 percent)
“Enterprises urgently need to foster a culture of continuous learning and prioritise robust AI policies and training, to ensure they are equipping their employees with the necessary expertise to leverage these technologies responsibly and effectively, unlocking AI’s full potential,” said Jason Lau, ISACA board director and CISO, Crypto.com. “It is just as important for organisations to make a deliberate shift to integrate AI into their security strategies; threat actors are already doing so, and failing to keep pace will expose organisations to escalating risks.”
AI Skills, Training Increasingly Essential
Respondents, however, recognise the vital importance of AI skills. Nearly a third say their organisations are adding AI-related roles in the next 12 months, and 85 percent of respondents agree or strongly agree that many jobs will be modified due to AI.
While 84 percent of digital trust professionals consider themselves to have just a beginner or intermediate level of expertise in AI, 72 percent believe that AI skills are very or extremely important for professionals in their field right now. Eighty-nine percent say they will need AI training within the next two years to advance their careers or even keep their current roles, and 45 percent say it is needed within the next six months.
AI Guidance, Resources
Access the pulse poll and related resources at www.isaca.org/ai-pulse-poll.
ISACA offers a range of other AI resources, including the Artificial Intelligence Audit Toolkit and several courses, among them AI Fundamentals, AI Governance, and AI Threat Landscape. ISACA has also recently released its new Advanced in AI Audit (AAIA) certification, a first-of-its-kind certification that can be earned by CISAs, CPAs and CIAs, and will be launching its Advanced in AI Security Management (AAISM) certification, which can be earned by CISMs and CISSPs, in August.