As Australia edges toward another election, a new and invisible threat stalks our democratic process: not hackers in dark rooms, but sophisticated, well-funded nation states wielding artificial intelligence.
According to Ginny Badanes, General Manager of Democracy Forward at Microsoft, 2024 has seen democracy itself placed in the crosshairs. Badanes pulls back the curtain on a new reality: global elections aren’t just about ballots anymore. They’re about shadow operations, fake news factories, and AI-powered influence ops on a scale never before seen.
“We tracked North Korea, Russia, China, and Iran,” Badanes reveals. “Each with their own playbook, exploiting AI and cyber campaigns to unleash chaos and confusion in Western democracies. In some cases, it isn’t even about changing your vote; it’s about making you doubt the entire system.”
Think you can spot fake news? The new breed of AI-generated ‘pink slime’ sites is indistinguishable from legitimate local news, right down to their convincing mastheads and photo-realistic bylines.
“This isn’t just about political manipulation,” Badanes warns. “Sometimes, the motivation is money: ad impressions and clicks. But more often, it’s about laundering disinformation so well, it seeps into the mainstream without scrutiny.”
The deepfake dilemma is hitting hard. Deepfake videos grabbed the early headlines, but the real threat now comes via audio. Your favourite politician, your CEO, even your mum – AI can now perfectly clone their voices in minutes.
“Audio deepfakes are incredibly convincing and virtually undetectable,” says Badanes. “In the recent US cycle, these tools were used to spread misinformation so seamlessly, victims didn’t realise they were being conned until it was too late.”
Influence campaigns have moved from social media to parliamentary seats. What’s even more alarming? This activity isn’t limited to the highest offices.
“We’re seeing foreign actors targeting even local candidates whose policies don’t align with adversarial governments,” Badanes discloses. “Nobody is ‘too small’ to escape the crosshairs; especially if their platform threatens a foreign adversary’s interests.”
Deceptive AI is not just a political problem. While the headlines focus on elections, AI-powered deception is hitting well beyond the ballot box. Over 90% of deepfakes circulated today are pornographic images targeting women, often public figures, aiming to silence their voices. And the stakes can be financial too: elderly citizens and corporate executives are falling prey to elaborate AI-enabled voice scams, losing millions in the process.
Is Australia next in the AI crosshairs? With our own election looming, Australia can’t afford to be complacent. But the solution doesn’t rest solely with tech giants – Badanes insists that a “healthy scepticism” is our best defence.
“Taiwan was targeted by deepfakes and propaganda during its elections, yet the population largely shrugged it off – because they’d been educated to expect it.”
Labelling and provenance technologies are emerging: tools that could, one day, empower voters to spot manipulated content at a glance. But, as Badanes cautions, the true front line is public awareness and resilience.
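Provenance schemes of this kind generally bind a cryptographic signature to content at the moment of publication, so that any later tampering becomes detectable. The following is a minimal sketch of that idea in Python, using a shared-secret HMAC purely for illustration; real provenance standards such as C2PA use public-key certificates, and all names here are hypothetical:

```python
import hashlib
import hmac

# Hypothetical newsroom signing key. A real provenance system would use
# public-key cryptography so verifiers never hold the secret.
SIGNING_KEY = b"newsroom-signing-key"

def sign_content(content: bytes) -> str:
    """Produce a provenance tag: an HMAC-SHA256 over the content bytes."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check whether the content still matches its provenance tag."""
    expected = sign_content(content)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, tag)

article = b"Council approves new transport plan"
tag = sign_content(article)

assert verify_content(article, tag)             # untouched content verifies
assert not verify_content(article + b"!", tag)  # any edit breaks the tag
```

The design point is the asymmetry: producing a valid tag requires the key, but checking one is cheap, which is what would let a browser or social platform flag unlabelled or altered media automatically.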
“We must foster trusted sources and equip the public with critical thinking skills. If everything becomes suspect, democracy itself falters.”
As defenders of information integrity, cybersecurity professionals, policymakers and media leaders must collaborate, innovate, and educate.