Artificial Intelligence (AI) is a transformative force in cybersecurity, playing a pivotal role in augmenting defence mechanisms against evolving cyber threats. AI technologies, such as machine learning and deep learning algorithms, empower cybersecurity systems to analyse vast amounts of data, detect patterns, and identify anomalies indicative of potential security incidents. These systems can enhance threat detection, automate response actions, and adapt to emerging attack vectors in real time. AI is also instrumental in developing predictive analytics that anticipate and mitigate cyber threats before they manifest.
Shielding customers from scams and fraud and providing a safe online experience is paramount. Institutions that fail to do so will find themselves punished financially and reputationally, as regulators continue to bear down and consumers and businesses take their accounts elsewhere.
Recently on the DevSecOops podcast, hosts Tom Walker and Scott Fletcher sat down with George Abraham, CISO at Influx, to discuss the changing nature of cybersecurity ...
Introduction
Generative AI (GenAI) is actively reshaping the way attackers and defenders operate in Australia. Threat actors have weaponised GenAI to synthesise text, code, ...
The email looks real. It sounds like your boss. Thinking it might be urgent, you click the link. And just like that, it’s over.
This is the new face of ...
Introduction
Cybercrime in Australia is rising rapidly. From the major breaches that held headlines hostage for months, to government agencies and critical infrastructure ...
Australia’s corporate leaders are sleepwalking into a technology blind spot that will cost them dearly. Shadow AI is already entrenched in workplaces, and boards that treat ...
AI has been making waves for years now. It has moved from the pages of science fiction into the control rooms of our defence and security agencies and critical ...
Quorum Cyber is working with Microsoft product teams to shape Sentinel product development, including validation of new scenarios, feedback on product operations, and API ...
With 34% of organisations suffering an AI-related breach, new Tenable report shows leadership is misjudging risk by focusing on reactive metrics instead of preventable threats