The Quiet Breach at CISA
Authority, AI, and the collapse of restraint at the nation’s cyber defense agency.
I am interrupting our scheduled series about the Healthcare CISO to bring you a shining example of trust collapse in action.
The Cybersecurity Chief and the Upload Button
When trust collapses, it rarely does so with a bang. It does so with a mouse click and a file-select dialog, apparently.
There are scandals that feel cinematic. And then there are scandals that feel structural. This one is the latter.
According to reporting by Ars Technica, the acting head of the Cybersecurity and Infrastructure Security Agency uploaded sensitive government material into a public instance of ChatGPT last summer. The material was marked “for official use only,” which is bureaucracy-speak for information that is not classified but is explicitly restricted from public release. At least four documents containing contracting and cybersecurity information triggered multiple automated security alerts in the first week of August alone.
This is not a story about one man making a mistake. It is a story about institutional incoherence, authority without literacy, and a government that keeps confusing deployment with understanding.
What “Upload” Actually Means
Let’s be precise about what happened here. When you paste text into the public version of ChatGPT, you are not sending it to a secure vault. You are feeding it into the training surface of a product used by hundreds of millions of people worldwide. The data becomes part of OpenAI’s ecosystem. The company is transparent about this: information you provide may be used to improve the model’s responses for other users.
DHS had already built DHSChat, an internal AI chatbot that operates within a secure, closed environment specifically designed to prevent user inputs from leaving federal networks. Data from DHSChat is not used to train external models. The tool was developed after extensive privacy impact assessments, with guardrails established through collaboration with cloud, cybersecurity, privacy, and civil rights experts across the department.
DHSChat was available to roughly 19,000 DHS headquarters employees at the time of the incident. It was designed for exactly the kind of work Madhu Gottumukkala, the acting CISA director, was attempting to do: summarizing contracting documents, processing internal material, generating analysis without exposing sensitive information to external systems.
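How does an upload like this trip “multiple automated security alerts”? Typically through data-loss-prevention tooling that inspects outbound traffic for dissemination-control markings. We don’t know what DHS actually runs; what follows is a minimal sketch of the idea in Python, with an invented internal hostname, not a description of any real system:

```python
import re

# Dissemination-control markings that commonly trigger data-loss-prevention
# (DLP) alerts. Purely illustrative; real rule sets are far broader and also
# inspect metadata, file hashes, and destination reputation.
CONTROL_MARKINGS = re.compile(
    r"\b(FOR OFFICIAL USE ONLY|FOUO|CONTROLLED UNCLASSIFIED INFORMATION|CUI)\b",
    re.IGNORECASE,
)

# Hosts inside the trust boundary in this sketch (hypothetical hostname).
APPROVED_DESTINATIONS = {"dhschat.internal.dhs.gov"}

def scan_outbound(payload: str, destination: str) -> list[str]:
    """Return alert strings for controlled content bound for external hosts."""
    alerts = []
    if destination not in APPROVED_DESTINATIONS and CONTROL_MARKINGS.search(payload):
        alerts.append(f"controlled-marking match in traffic to {destination}")
    return alerts

# A paste into a public chatbot fires an alert; the internal tool does not.
doc = "FOR OFFICIAL USE ONLY: FY25 contracting summary ..."
print(scan_outbound(doc, "chatgpt.com"))               # alert fires
print(scan_outbound(doc, "dhschat.internal.dhs.gov"))  # []
```

The point of the sketch is that the control is mechanical and indifferent to rank. The alarms fired because the markings were right there in the text.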
Gottumukkala requested and received special permission to use ChatGPT shortly after arriving at CISA in May 2025. By then, DHS had already restricted access to commercial generative AI systems, directing employees to use internal tools. Most DHS employees could not access public AI platforms. For good reason.
But hierarchy substituted itself for judgment. Authority became its own justification.
The Permission Slip Problem
One anonymous official characterized the sequence bluntly: “He forced CISA’s hand into making them give him ChatGPT, and then he abused it.”
This is the first structural failure. Why was special permission granted at all?
This was not a junior analyst cutting corners because internal tools were slow or cumbersome. This was the top cyber defense official in the country insisting on access to a tool his own agency had deemed unsafe for general use. That is not innovation. That is hierarchy performing exemption from the rules it enforces on others.
Authority is not competence. Access is not literacy.
Following the incident, Gottumukkala held meetings with senior DHS and CISA officials, including legal and information security chiefs, to review the uploads and discuss the handling of sensitive material. This is what damage control looks like when the person who needs controlling is the one in charge of the controls.
The permission structure here reveals something corrosive. When leaders can exempt themselves from the constraints designed to protect the systems they oversee, those constraints become theater. They apply to subordinates. They dissolve for superiors. This is not governance. This is performance.
“Modernization” as a Get-Out-of-Jail-Free Card
When questioned about the incident, DHS spokespeople pointed to executive orders encouraging AI adoption across government. This framing treats policy as permission to bypass safety architecture.
This is the most dangerous sentence in modern governance: “We were told to deploy.”
Deployment without governance is how systems rot from the inside. AI is not a software update. It is an epistemic instrument. It absorbs what you give it, reflects it back in altered form, and redistributes risk in ways that are hard to trace and impossible to recall.
Uploading sensitive material into a public model is not a policy error. It is a category error. Treating AI like a search engine instead of a memory surface. Treating convenience like capability. Treating speed like strategy.
Once the data is in, you don’t get it back. Any information uploaded into a public version of ChatGPT is shared with OpenAI and may be used to help improve responses for other users. The material does not stay contained. It becomes part of the diffusion network.
The alternative was right there. DHSChat existed precisely to allow AI experimentation without this exposure. The tool was built to enable employees to leverage generative AI capabilities safely and securely using non-public data. It was designed for this exact use case.
Gottumukkala chose the public tool anyway.
The Anti-Trust Pattern
Zoom out, and a pattern emerges.
In May 2025, Gottumukkala told personnel at CISA that much of its leadership was resigning. Mass departures gut institutional memory. They signal that something inside the system has become untenable.
In June, Gottumukkala requested access to a controlled-access program, which required a polygraph examination. He failed the polygraph in the final weeks of July. DHS then began investigating the circumstances surrounding the test and suspended six career staffers, telling them the polygraph need not have been administered.
This is not what a high-trust system looks like. This is what happens when impunity outruns accountability.
Gottumukkala also attempted to remove CISA’s Chief Information Officer, Robert Costello, a move that other political appointees reportedly blocked. When leadership tries to remove oversight figures and meets resistance from within its own political layer, the dysfunction has metastasized.
Staffers reportedly called the tenure a “nightmare.” That word matters. Nightmares are not random. They are the psyche trying to warn you that something is wrong with the environment.
When leaders can make errors without consequence while subordinates absorb the blast radius, trust does not erode. It collapses. Quietly. Systemically.
Congressional Testimony and the Performance of Confidence
During congressional testimony in late January 2026, Gottumukkala rejected characterizations of the polygraph incident, stating he did “not accept the premise of that characterization.” This is the language of deflection masquerading as precision.
Congressional oversight exists to surface patterns. When leadership cannot articulate baseline threat forecasts, cannot maintain staff stability, cannot model the restraint its mission requires, the oversight function becomes diagnostic. It reveals the distance between institutional mandate and operational reality.
CISA exists to protect national trust surfaces: elections, infrastructure, coordination mechanisms. When its own leadership treats those surfaces casually, the danger is not just a single data leak. The danger is precedent.
If the cyber defense chief cannot model restraint around information handling, who exactly is supposed to?
The Real Risk Isn’t ChatGPT
To be clear, and frankly I feel weird defending OpenAI here: this is not about OpenAI behaving badly. OpenAI did not force anyone to upload government material. The platform operates according to its stated terms. Users agree to those terms when they use the service.
The real risk is governance theater. Leaders performing “modernization” while bypassing the very controls their agencies were built to enforce.
Cybersecurity is not about tools. It is about judgment under constraint. AI mirrors and amplifies whatever culture you put around it. In a coherent system, it has the potential to augment care. In a brittle one, it accelerates failure and rupture.
The failure here is structural. Prior to his appointment at CISA, Gottumukkala served as chief information officer of South Dakota under then-governor Kristi Noem, who went on to become DHS Secretary in the Trump administration. This is a personnel pipeline, not a competency filter. Loyalty gets routed through institutional architecture as if it were the same thing as capability.
It is not.
What Collapse Looks Like Now
No flames. No sirens. Just a quiet upload, multiple automated security alerts, an internal review, and a press statement about “our commitment to innovation.”
A CISA spokesperson told Politico that Gottumukkala’s use of ChatGPT was “short-term and limited,” noting that he last used the tool in mid-July 2025 under an authorized temporary exception. This framing treats duration as exoneration. As if the problem were how long the exposure window stayed open rather than that it was opened at all.
Trust does not fail because of hackers alone. It fails when those in charge confuse speed with progress, permission with safety, and authority with wisdom.
The nightmare here is not that sensitive data might surface somewhere in an AI model’s training corpus. The nightmare is that the people responsible for preventing exactly that do not seem to understand why it matters.
DHS developed an entire internal AI infrastructure specifically to allow experimentation without this exposure. Privacy reviews. Security guardrails. Training protocols. A tool designed for the exact workflow Gottumukkala needed. He bypassed all of it.
And when automated systems caught the breach, when alarms fired exactly as designed, the response was not accountability. It was meetings. Reviews. Deflection. The machinery of looking serious without imposing consequences.
This is not a cybersecurity problem.
This is a governance failure.
And it is not an accident. It is a system working exactly as designed: to protect leadership from the constraints that bind everyone else. To perform competence while concentrating impunity. To demand trust while demolishing the conditions that make trust possible.
The collapse is quiet. The precedent is loud. And the people who should be listening are the ones who stopped paying attention the moment they received permission to act without restraint.