Cybersecurity Chief’s AI Slip Sparks Federal Security Review
Posted: Monday, Feb 02
  • KBI.Media
By Karissa Breen (KB), Head of Journalism, KBI.Media



The acting director of the Cybersecurity and Infrastructure Security Agency (CISA), Madhu Gottumukkala, is facing industry scrutiny after accidentally uploading sensitive government documents into a public version of ChatGPT, according to reports online.

The documents were not classified, but they were marked ‘For Official Use Only’, a designation for material explicitly meant to stay inside secure government systems. Their appearance inside a consumer AI platform triggered internal security alerts and prompted a damage assessment by the Department of Homeland Security (DHS), which oversees CISA.

“Education about the privacy and cybersecurity implications of AI usage has been largely missing from the massive push toward AI adoption. And what this incident shows is that organizations and individuals don’t seem to learn from each others’ documented failures,” said Irina Raicu, Director, Internet Ethics at Markkula Center for Applied Ethics at Santa Clara University.

At the time of the uploads, ChatGPT was blocked for most DHS employees due to concerns about data leakage and external retention of sensitive information. Gottumukkala reportedly had special permission to access the tool, but that exception has now become the core of the controversy.

“At the most fundamental level, this is not an AI problem. It is a problem of governance and workflows,” said Chris Hutchins, Founder and CEO of Hutchins Data Strategy Consulting. “Generative AI tools exhibit an informal and low friction interface that decreases the psychological barriers that protect sensitive information and encourages the informal handling of sensitive information.”

The irony was hard to ignore: the government official tasked with defending America’s networks had allegedly bypassed the very controls his agency promotes.

CISA sits at the centre of United States cyber defence, coordinating responses to ransomware, nation-state threats, and attacks on critical infrastructure. Its guidance helps shape how federal agencies and private companies manage cyber risk.

Security experts warn the incident highlights a growing disconnect between AI enthusiasm at the top and governance on the ground. Public generative AI tools may feel harmless, but once sensitive data leaves controlled environments, the risk calculus changes fast.

Even when documents aren’t classified, exposure can still create intelligence value, reveal operational details, or erode trust in leadership.

CISA has defended Gottumukkala’s actions, saying his use of ChatGPT was limited, approved, and short term, ending in mid-2025, and that safeguards were in place. Officials stress that ChatGPT remains blocked by default across DHS.

The most critical risk is that these problems rarely surface at the moment of use. Instead, they emerge later, during incident reviews, customer inquiries, audits, or legal discovery. And given the same conditions, there is little reason to expect they won’t recur.

Hutchins added, “This case is not an indictment of the use of AI. However, it is an indictment of the gaps in governance with the use of AI tools.”

But critics online argue the explanation misses the point: rules that bend for leadership are rules that fail, and exceptions at the top invite others across the organisation to follow suit.

“Until AI enthusiasts can explain indirect prompt injections as well as they can explain less nefarious prompting, we will live in a world of increasing risk,” added Raicu.

The bigger question most security folks are asking is whether government and corporate America are moving faster on AI adoption than on accountability.

If the head of the nation’s cyber defence agency can make this mistake, skeptics ask, what’s happening inside less mature organisations with weaker controls and fewer guardrails? The honest answer is that nobody knows.
