Securing AI isn’t just about defending against cyber threats – it’s about establishing governance frameworks that ensure AI is used responsibly. With AI adoption accelerating, organisations need to align leadership, compliance teams, and IT security to manage risks without stifling innovation.
Janice Le, General Manager of Microsoft Security, explored the ongoing challenges of governing AI and the strategic steps enterprises can take to secure its adoption.
“It’s actually not just about securing AI. It’s actually securing and governing the AI. And that’s quite honestly the first concern that a lot of our customers have,” added Le.
Microsoft’s approach to securing AI follows three core steps:
- Discovery – Identifying where and how AI is being used, including consumer-grade tools like ChatGPT
- Governance – Implementing policies around data security, compliance, and ethical AI use
- Protection – Safeguarding AI systems from misuse, manipulation, and emerging threats
Le noted that while AI adoption is outpacing previous technological shifts such as cloud computing, companies are more mindful this time about integrating security from the outset. Microsoft’s Secure Future Initiative reflects this mindset, embedding security into every layer of development, with the equivalent of 34,000 full-time engineers focused purely on security.
AI is also changing security operations. Tools like Security Copilot are designed to reduce alert fatigue, automate routine tasks, and enhance threat detection, freeing cybersecurity teams to focus on high-impact challenges and critical thinking. Le sees AI as a force multiplier for defenders, helping companies combat cyber incidents.
Le went on to say, “What Security Copilot can do is to actually prioritise the things that you shouldn’t ignore. So help you find the signal from all the noise so that you can focus on the right things.”
Looking ahead, Le asserted that security is a ‘team sport’, urging greater collaboration across industries to simplify security solutions and strengthen global cyber defences.