Google Threat Intelligence Group (GTIG) has warned that threat actors are now operationalising AI at a scale and level of sophistication that are rapidly reshaping the cyber threat landscape, with state-backed groups and cybercriminals using large language models to accelerate exploit development, malware creation, reconnaissance and influence operations.
The latest GTIG AI Threat Tracker report details how actors linked to China, North Korea and Russia are increasingly embedding AI into offensive cyber workflows, moving beyond experimentation and into what Google describes as “industrial-scale application” of generative AI.
Among the report’s most significant findings is what GTIG believes may be the first observed AI-assisted zero-day exploit developed by cybercriminals. Researchers identified a vulnerability exploitation campaign involving a two-factor authentication bypass flaw that they assess was likely discovered and weaponised with AI support.
GTIG said frontier LLMs are increasingly capable of identifying higher-level semantic logic flaws that conventional scanners often fail to detect, marking a potentially significant shift in how vulnerabilities may be discovered in future.
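The report does not publish details of the exploited flaw, but the class of bug GTIG describes, a semantic logic error rather than a pattern-matchable weakness such as an injection or memory-safety bug, can be illustrated with a hypothetical sketch. The function names and fields below are illustrative assumptions, not taken from the report:

```python
# Hypothetical sketch of a semantic logic flaw: each line "looks" safe to a
# signature-based scanner, but the 2FA check can be skipped entirely because
# it only runs when the client chooses to supply an OTP.

def verify_login(user: dict, password: str, request: dict) -> bool:
    """Authenticate with a password plus (accidentally optional) 2FA."""
    if password != user["password"]:
        return False
    # BUG: the OTP is validated only if the client sends one. An attacker
    # who simply omits the "otp" field bypasses 2FA altogether.
    if "otp" in request:
        return request["otp"] == user["expected_otp"]
    return True  # reached when 2FA is silently skipped


def verify_login_fixed(user: dict, password: str, request: dict) -> bool:
    """Correct logic: require 2FA whenever the account has it enabled,
    regardless of what the client sends."""
    if password != user["password"]:
        return False
    if user.get("mfa_enabled"):
        return request.get("otp") == user["expected_otp"]
    return True
```

Spotting this kind of flaw requires reasoning about intent (2FA should be mandatory when enabled) rather than matching known-bad code patterns, which is why GTIG flags it as territory where LLM-assisted analysis may outperform conventional scanners.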
The report also highlights how AI is now being used to improve malware evasion and operational resilience. Researchers observed PRC-linked threat actor APT27 leveraging Gemini to accelerate development of infrastructure management tooling likely associated with operational relay box (ORB) networks used to obfuscate attack origins.
Meanwhile, Russia-linked actors targeting Ukrainian organisations were found deploying malware families including CANFAIL and LONGSTREAM containing large amounts of AI-generated decoy code designed to disguise malicious behaviour and frustrate defenders.
GTIG also detailed analysis of PROMPTSPY, an Android backdoor integrating Gemini into autonomous malware operations. According to the report, PROMPTSPY can analyse device interfaces, generate commands and autonomously interact with infected devices with minimal human oversight.
Researchers said the malware can also replay biometric authentication gestures, dynamically rotate infrastructure and maintain persistence even if parts of its command-and-control infrastructure are disrupted.
Beyond malware, GTIG warned threat actors are increasingly leveraging AI for reconnaissance and phishing. Researchers observed actors using LLMs to map organisational hierarchies, identify high-value targets and generate more convincing phishing lures tailored to enterprise environments.
The report also highlights the rise of “agentic” offensive frameworks, where AI systems move beyond passive assistance and begin autonomously orchestrating reconnaissance and vulnerability validation tasks. GTIG linked some of this activity to suspected PRC-nexus operations targeting organisations across Asia.
Outside traditional intrusion activity, GTIG identified growing use of AI-generated media in information operations, including suspected AI voice-cloning activity linked to the pro-Russia campaign “Operation Overload”.
The report also warns that threat actors are industrialising access to frontier AI models using proxy relays, automated registration pipelines and account pooling infrastructure designed to bypass safety guardrails and account restrictions.
At the same time, AI ecosystems themselves are becoming a growing target. GTIG documented malicious OpenClaw skills capable of executing unauthorised commands, alongside broader supply chain attacks affecting projects including LiteLLM, BerriAI and associated GitHub repositories.
Google said the findings reinforce the need for stronger AI security standards and highlighted defensive initiatives including Big Sleep, an AI-powered vulnerability discovery agent, and CodeMender, an experimental system designed to automatically patch software vulnerabilities.