Trend Micro Incorporated (TYO: 4704; TSE: 4704), a global cybersecurity leader, today urged AI engineers and IT leaders to heed best practices in developing and deploying secure systems, or risk exposure to data theft, poisoning, ransom, and other attacks.
To learn more about how network defenders and adversaries are using AI, read the Trend State of AI Security Report, 1H 2025: https://www.trendmicro.com/vinfo/au/security/news/threat-landscape/trend-micro-state-of-ai-security-report-1h-2025
Mick McCluney, ANZ Field CTO at Trend Micro: “AI may represent the opportunity of the century for ANZ businesses. But those rushing in too fast without taking adequate security precautions may end up causing more harm than good. As our report reveals, too much global AI infrastructure is already being built from unsecured and/or unpatched components, creating an open door for threat actors.”
Trend’s report highlights several AI-related security challenges:
- Vulnerabilities/exploits in critical components – Organisations that develop, deploy and use AI applications must rely on multiple specialised software components and frameworks, which can contain the same kinds of vulnerabilities found in any other software. The report reveals zero-day vulnerabilities and exploits in core components including ChromaDB, Redis, NVIDIA Triton, and NVIDIA Container Toolkit.
- Accidental exposure to the internet – Vulnerabilities are often the result of rushed development and deployment timelines, as are instances in which AI systems are accidentally exposed to the internet, where adversaries can probe them. As detailed in the report, Trend has found 200+ ChromaDB servers, 2,000 Redis servers, and 10,000+ Ollama servers exposed to the internet with no authentication. (A minimal sketch of such a probe follows this list.)
- Vulnerabilities in open-source components – Many AI frameworks and platforms use open-source software libraries to provide common functionality. However, open-source components often contain vulnerabilities that creep into production systems, where they are hard to detect. At the recent Pwn2Own Berlin, which featured a new AI category, researchers uncovered an exploit for the Redis vector database that stemmed from an outdated Lua component.
- Container-based weaknesses – A great deal of AI infrastructure runs on containers, meaning it is exposed to the same security vulnerabilities and threats that impact cloud and container environments. As outlined in the report, Pwn2Own researchers also uncovered an exploit for the NVIDIA Container Toolkit. Organisations should sanitise inputs and monitor runtime behaviour to mitigate such risks.
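The kind of exposure described above can be verified with a simple reachability probe. Below is a minimal sketch, assuming a host you are authorised to test (the hostname shown is a placeholder, not a finding from the report): it sends Redis's inline PING command over a raw TCP socket, where an open server answers +PONG and one with authentication enabled replies with a -NOAUTH error. The same idea applies to exposed Ollama or ChromaDB instances, whose HTTP endpoints can be checked for unauthenticated responses.

```python
import socket

def redis_requires_auth(host, port=6379, timeout=3.0):
    """Probe a Redis server (that you are authorised to test) for open access.

    Returns False if the server answers PING without credentials (exposed),
    True if it demands authentication, and None if it is unreachable or the
    reply is unrecognised.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"PING\r\n")   # Redis inline command syntax
            reply = sock.recv(64)
    except OSError:
        return None                     # closed port, firewall, or timeout
    if reply.startswith(b"+PONG"):
        return False                    # answered without auth: exposed
    if reply.startswith(b"-NOAUTH"):
        return True                     # requirepass/ACLs are in force
    return None                         # e.g. protected-mode -DENIED reply

if __name__ == "__main__":
    # "redis.example.internal" is a hypothetical placeholder host.
    status = redis_requires_auth("redis.example.internal")
    print({None: "unreachable", True: "auth enforced", False: "EXPOSED"}[status])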
Stuart MacLellan, CTO, NHS SLAM: “There are still lots of questions around AI models and how they could and should be used. We now get much more information than we ever did about the visibility of devices and what applications are being used. It’s interesting to collate that data and get dynamic, risk-based alerts on people and what they’re doing depending on policies and processes. That’s going to really empower the decisions that are made organisationally around certain products.”
To mitigate the risks outlined above, both the developer community and its customers must better balance security against time to market. Concrete steps could include:
- Improved patch management and vulnerability scans
- Maintaining an inventory of all software components, including third-party libraries and subsystems
- Container security best practices, including minimal base images and runtime security tooling
- Configuration checks to ensure AI infrastructure components such as servers aren’t exposed to the internet (a sample check is sketched below)
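As an illustration of that last point, here is a minimal configuration-check sketch. It assumes the third-party psutil package and a port-to-service map based on common defaults (Redis 6379, Ollama 11434, ChromaDB/Triton HTTP 8000, Triton gRPC 8001); those port numbers are assumptions, not figures from the report. The script flags listed services that are listening on all interfaces rather than loopback. It is a starting point only, and complements rather than replaces scanning from outside the network perimeter.

```python
# Minimal sketch: flag AI-service ports listening on every interface.
# Requires the third-party psutil package (pip install psutil) and may
# need elevated privileges on some platforms. The port map below uses
# assumed common defaults, not data from the Trend report.
import psutil

DEFAULT_PORTS = {
    6379: "Redis",
    11434: "Ollama",
    8000: "ChromaDB / NVIDIA Triton (HTTP)",
    8001: "NVIDIA Triton (gRPC)",
}

WILDCARD_ADDRS = {"0.0.0.0", "::"}   # bound to all interfaces

def find_exposed_listeners():
    """Return (service, ip, port) tuples for flagged wildcard listeners."""
    exposed = []
    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_LISTEN or not conn.laddr:
            continue
        service = DEFAULT_PORTS.get(conn.laddr.port)
        if service and conn.laddr.ip in WILDCARD_ADDRS:
            exposed.append((service, conn.laddr.ip, conn.laddr.port))
    return exposed

if __name__ == "__main__":
    hits = find_exposed_listeners()
    if not hits:
        print("No flagged AI services listening on all interfaces.")
    for service, ip, port in hits:
        print(f"WARNING: {service} is listening on {ip}:{port} - "
              f"bind it to localhost or place it behind auth/a firewall.")
```

Binding such services to 127.0.0.1 and fronting them with an authenticated reverse proxy closes the most common exposure path described above.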