Cybersecurity Chief’s AI Slip Sparks Federal Security Review
Posted: Monday, Feb 02


The acting director of the Cybersecurity and Infrastructure Security Agency (CISA), Madhu Gottumukkala, is facing industry scrutiny after accidentally uploading sensitive government documents into a public version of ChatGPT, according to online reports.

The documents were not classified, but they were marked ‘For Official Use Only’, a designation for material explicitly meant to stay inside secure government systems. Their appearance inside a consumer AI platform triggered internal security alerts and prompted a damage assessment by the Department of Homeland Security (DHS), which oversees CISA.

“Education about the privacy and cybersecurity implications of AI usage has been largely missing from the massive push toward AI adoption. And what this incident shows is that organizations and individuals don’t seem to learn from each other’s documented failures,” said Irina Raicu, Director of Internet Ethics at the Markkula Center for Applied Ethics at Santa Clara University.

At the time of the uploads, ChatGPT was blocked for most DHS employees due to concerns about data leakage and external retention of sensitive information. Gottumukkala reportedly had special permission to access the tool, but that exception has now become the core of the controversy.

“At the most fundamental level, this is not an AI problem. It is a problem of governance and workflows,” said Chris Hutchins, Founder and CEO of Hutchins Data Strategy Consulting. “Generative AI tools exhibit an informal and low-friction interface that decreases the psychological barriers that protect sensitive information and encourages the informal handling of sensitive information.”

The irony is hard to ignore: the government official tasked with defending America’s networks allegedly bypassed the very controls his agency promotes.

CISA sits at the centre of United States cyber defence, assisting and coordinating responses to ransomware, nation-state threats, and attacks on critical infrastructure. Its guidance helps shape how federal agencies and private companies manage cyber risk.

Security experts warn the incident highlights a growing disconnect between AI enthusiasm at the top and governance on the ground. Public generative AI tools may feel harmless, but once sensitive data leaves controlled environments, the risk calculus changes fast.

Even when documents aren’t classified, exposure can still create intelligence value, reveal operational details, or erode trust in leadership.

“CISA has defended Gottumukkala’s actions, saying his use of ChatGPT was limited, approved and short-term, ending in mid-2025, and that safeguards were in place,” Hutchins went on to say. “Officials stress that ChatGPT remains blocked by default across DHS.”

The most critical risk is that these problems rarely surface at the moment of use. Instead, they emerge later, during incident reviews, customer inquiries, audits, or legal discovery. Given the same conditions, the same mistakes are likely to recur.

Hutchins added, “This case is not an indictment of the use of AI. However, it is an indictment of the gaps in governance with the use of AI tools.”

But critics online argue the explanation misses the point: rules that bend for leadership are rules that fail. Exceptions at the top also set a precedent that makes it easier for others to follow suit.

“Until AI enthusiasts can explain indirect prompt injections as well as they can explain less nefarious prompting, we will live in a world of increasing risk,” added Raicu.

The bigger question most security folks are asking is whether government and corporate America are moving faster on AI adoption than on accountability.

If the head of the nation’s cyber defence agency can make this mistake, skeptics ask, what’s happening inside less mature organisations with weaker controls and fewer guardrails? For now, nobody knows.
