Australia’s Next Insider Threat Might Be an Algorithm
Posted: Tuesday, Dec 02

Introduction

As Australian organisations race to embed AI tools across human resources, finance and operations, they may be missing a critical security fault line. While disgruntled employees or contractors gone rogue remain a concern, the next major data breach is increasingly likely to come from an algorithm that never intended malice.

Traditionally, “insider threat” has meant someone with authorised access who misuses credentials, steals IP, or simply makes a costly mistake without realising it. The model is well-trodden. But in 2025 we are entering a different paradigm: the algorithmic insider threat, a generative AI system or agent embedded in enterprise workflows, quietly ingesting and sharing sensitive data far beyond human intent or awareness.

A Changing Face

Across Australia, companies are rushing to plug generative AI into daily operations, often without fully understanding where the data they feed it goes or how it is used. HR teams are asking chatbots to draft performance reviews using real employee records. Finance departments are feeding confidential budgets and contracts into “AI co-pilots” for quick summaries. Marketing teams are using models trained on customer data to write campaigns. Piece by piece, organisations are creating digital insiders that have seen everything and remember all of it.

These systems don’t have motives. They don’t know what “confidential” means. They don’t differentiate between a public prompt and a privileged document. Ask the wrong question of the wrong model, and you could expose years of intellectual property or personal data in seconds. And because these models work invisibly behind APIs and plugins, few organisations even realise when sensitive information has been exposed or reused.

Generative AI doesn’t need to steal data to cause damage; it just needs to follow instructions. When a model trained on private data produces outputs that contain fragments of that information, or when it “learns” from confidential material and replicates it elsewhere, it behaves like an employee gossiping across departments, only at machine speed and a global scale.

A Critical Difference

Unlike human insiders, algorithmic insiders don’t leave fingerprints. Traditional security tools are built to detect human behaviour such as logins, downloads, and file transfers. But when the insider is an API, the signals look like normal system traffic. There’s no suspicious login, no offboarding event, no malicious email. Just a model doing what it was designed to do: absorb and optimise.

This gap in visibility is particularly worrying for Australia’s critical industries, where a single model can touch multiple data domains. An AI built to improve operational efficiency might also access procurement records, supplier contracts, and even customer data. Each new integration widens the blast radius. What used to be safely siloed is now stitched together by code.

And the governance frameworks aren’t keeping up. While Europe phases in its AI Act, Australia’s approach remains voluntary. Boards are embracing AI to boost productivity and cut costs, but few are asking where the data lives, who can access it, or how it’s protected once it enters a model. The result is predictable: organisations are moving faster than their risk frameworks can handle.

This isn’t a call to slam the brakes on innovation; in fact, far from it. AI has enormous potential to enhance decision-making, automate routine tasks and accelerate growth. But it must be governed like any other insider. That means giving AI systems defined identities, limited privileges, and continuous oversight. Treat every AI agent as you would a human with access to sensitive information: log its actions, track its data lineage, and restrict what it can see and share.
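To make that concrete, here is a minimal, illustrative sketch (in Python) of what “governing an AI agent like a human insider” can look like in practice: a named identity, an allow-list of data domains it may read, and an audit record for every request. All names here (AgentIdentity, ai_agent_audit.log, the domain labels) are hypothetical examples, not any specific product or framework.

```python
"""Illustrative sketch only: a thin governance wrapper that treats an AI
agent like a human insider. Identity names, domain labels and the audit
log path are hypothetical, not a specific vendor API."""
import json
import logging
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Every request the agent makes is written to an append-only log,
# so data lineage can be reconstructed later.
logging.basicConfig(filename="ai_agent_audit.log", level=logging.INFO,
                    format="%(message)s")


@dataclass
class AgentIdentity:
    """A named, least-privilege identity for one AI agent."""
    name: str
    allowed_domains: set = field(default_factory=set)  # e.g. {"public_marketing"}

    def can_read(self, data_domain: str) -> bool:
        return data_domain in self.allowed_domains


def submit_to_agent(agent: AgentIdentity, data_domain: str, prompt: str) -> str:
    """Gate and log every request before it ever reaches a model."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent.name,
        "domain": data_domain,
        "allowed": agent.can_read(data_domain),
        "prompt_chars": len(prompt),  # record size and lineage, not the content
    }
    logging.info(json.dumps(record))
    if not record["allowed"]:
        raise PermissionError(f"{agent.name} is not cleared for '{data_domain}' data")
    # ... only now would the prompt be forwarded to the actual model ...
    return "(model response)"


if __name__ == "__main__":
    copywriter = AgentIdentity(name="marketing-copilot",
                               allowed_domains={"public_marketing"})
    submit_to_agent(copywriter, "public_marketing", "Draft a product blurb.")
    try:
        submit_to_agent(copywriter, "hr_records", "Summarise salary bands.")
    except PermissionError as err:
        print(err)  # the request is blocked, and the attempt is on the record
```

The point of the sketch is not the code itself but the pattern: the agent has an identity, its privileges are scoped in advance, and every access attempt, allowed or denied, leaves an audit trail.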

Conclusion

Boards and executives also need to rethink training. “Prompt hygiene,” or the awareness of what can and cannot be shared with an AI tool, is becoming as essential as phishing awareness. Employees need to know that an innocent query to a chatbot could create a permanent data exposure.
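As a deliberately simple illustration of what prompt hygiene can mean in practice, a pre-submission check can flag obviously sensitive content before it is pasted into an external chatbot. The patterns below are placeholder assumptions; real data-loss-prevention tooling goes much further.

```python
"""Illustrative sketch only: a crude "prompt hygiene" pre-check. The
patterns are examples, not a complete or authoritative DLP rule set."""
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "9-digit identifier (e.g. TFN)": re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),
    "classification marking": re.compile(
        r"\b(confidential|commercial[- ]in[- ]confidence)\b", re.I),
}


def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


if __name__ == "__main__":
    draft = "Summarise this CONFIDENTIAL contract for jo.bloggs@example.com"
    findings = check_prompt(draft)
    if findings:
        print("Hold on - this prompt appears to contain:", ", ".join(findings))
    else:
        print("No obvious sensitive markers found.")
```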

The uncomfortable truth is that the next insider threat may never clock in, never take leave, and never show intent. It will simply do what it was programmed to do, only too well and without restraint.

Australia’s AI revolution is already underway. But unless governance catches up with innovation, the same tools driving productivity could quietly drive the next wave of data leaks. The algorithm doesn’t need to betray you. You gave it the keys.

Bob Huber
As Tenable’s Chief Security Officer, Head of Research and President of Tenable Public Sector, LLC, Robert Huber oversees the company's global security and research teams, working cross-functionally to reduce risk to the organization, its customers and the broader industry. He has more than 25 years of cyber security experience across the financial, defense, critical infrastructure and technology sectors. Prior to joining Tenable, Robert was a chief security and strategy officer at Eastwind Networks. He was previously co-founder and president of Critical Intelligence, an OT threat intelligence and solutions provider, which cyber threat intelligence leader iSIGHT Partners acquired in 2015. He also served as a member of the Lockheed Martin CIRT, an OT security researcher at Idaho National Laboratory and was a chief security architect for JP Morgan Chase. Robert is a board member and advisor to several security startups and served in the U.S. Air Force and Air National Guard for more than 22 years. Before retiring in 2021, he provided offensive and defensive cyber capabilities supporting the National Security Agency (NSA), United States Cyber Command and state missions.