“Governments Shouldn’t Pick Winners”—But They Must Pick Boundaries
When Cognitive Infrastructure Becomes National Infrastructure
The Clarity After the Controversy
In November 2025, OpenAI’s CFO Sarah Friar sparked a firestorm when she suggested at a Wall Street Journal event that the U.S. government should provide “backstops”—loan guarantees—for the company’s trillion-dollar infrastructure buildout. The backlash was swift and severe, with David Sacks, the White House AI czar, declaring flatly: “There will be no federal bailout for AI.”
Sam Altman moved quickly to clarify: “We do not have or want government guarantees for OpenAI datacenters. We believe that governments should not pick winners or losers, and that taxpayers should not bail out companies that make bad business decisions.”
On this specific point, that taxpayers shouldn’t underwrite private company failures, Altman is absolutely right. Markets need failure. Creative destruction is capitalism’s immune system, and weakening it creates moral hazard at scale. The 2008 financial crisis proved this with brutal clarity: when profits are privatized but losses are socialized, you don’t get innovation; you get recklessness subsidized by the public.
But beneath Altman’s correct position on bailouts lies a deeper question he systematically avoids addressing:
If AI becomes critical infrastructure—the cognitive nervous system mediating scientific discovery, governance coordination, healthcare, cyber-defense, and public communication—who sets the boundaries when that infrastructure exceeds human auditability?
Altman’s answer, “the market,” is not an answer. It is an evasion.
And the contradiction at the heart of his position becomes stark when you examine what’s actually happening.
The Scale Reveals the Stakes
In January 2025, standing at the White House alongside President Trump, Altman announced the Stargate Project: a new company intending to invest $500 billion over four years in AI infrastructure across the United States, with $100 billion deployed immediately. The venture, a joint effort between OpenAI, SoftBank, Oracle, and investment firm MGX, has been compared to the Manhattan Project in scale.
By September 2025, OpenAI announced five new data center sites under Stargate, bringing the initiative to nearly 7 gigawatts of planned capacity and over $400 billion in investment over the next three years, putting it ahead of schedule to secure the full $500 billion commitment by year’s end.
On top of Stargate, Altman has stated that OpenAI has “commitments of about $1.4 trillion over the next 8 years,” with the company expecting to reach over $20 billion in annualized revenue this year and grow to “hundreds of billions by 2030.”
These numbers aren’t abstract. They represent a fundamental restructuring of how cognitive work—reasoning, synthesis, prediction, coordination—gets done. The infrastructure being built won’t just support applications; it will shape which questions get asked, which options become visible, which pathways through problems become legible, and which decisions get automated by default.
This isn’t search. This isn’t cloud computing. This is the emergence of an interpretive layer over society, one that can influence, coordinate, or misdirect entire populations at scale.
And here’s the contradiction: You cannot simultaneously claim that this infrastructure is so critical that it requires half a trillion dollars in investment while also claiming that its governance should be left entirely to market forces.
The Regulatory Reversal
The shift in Altman’s position on regulation is instructive. In May 2023, testifying before Congress, Altman supported the creation of a federal agency that could grant licenses to create AI models above certain capability thresholds and revoke those licenses if models didn’t meet safety guidelines. He proposed government-set safety standards for high-capability AI models and mandatory independent audits from experts unaffiliated with creators.
But by May 2025, Altman’s testimony had shifted dramatically. He called proposals requiring AI developers to vet their systems before deployment “disastrous” for the industry. When asked about having NIST set AI standards, he replied, “I don’t think we need it. It can be helpful.” He advocated for “sensible regulation that does not slow us down.”
The Brookings Institution noted this reversal starkly: “Altman’s testimony was worlds away from his 2023 appearance, when the primary focus of lawmakers was AI safety and regulation. Altman himself urged Congress at the time to implement regulations for AI technologies, emphasizing the potential risks if left unchecked.”
What changed? The money got bigger. The stakes got higher. And the regulatory environment got friendlier to industry interests.
But the fundamental question didn’t change; it only became more urgent.
Markets Create Value. They Do Not Contain Risk.
OpenAI’s trillion-dollar compute plan is not inherently dangerous. What is dangerous is the rhetorical slip embedded in the industry’s position: “We will scale first. The market will deal with it if we’re wrong.”
Markets are wonderful engines of innovation. They excel at price discovery, resource allocation, and rewarding value creation. But they are catastrophically bad at risk containment, especially for collective, diffuse, and systemic risks.
Markets did not prevent the 2008 financial collapse. They did not rein in Boeing’s shortcuts until aircraft parts fell from the sky. They did not secure the electrical grid until the grid failed. They did not protect data privacy until Cambridge Analytica had already weaponized personal information at scale.
Markets are superb at distributing value. They are structurally incapable of mitigating harm when that harm is:
Diffuse (affecting populations broadly rather than concentrated stakeholders)
Delayed (manifesting quarters or years after the decisions that caused it)
Systemic (threatening the stability of the system itself rather than individual actors within it)
AI risk is all three. Which means it’s not a shareholder problem; it’s a civilizational one.
The Infrastructure Shift: From Tools to Cognitive Utilities
The reason this debate feels slippery is that our conceptual frames haven’t updated. Policymakers and industry leaders still talk about AI as if it were “technology” rather than infrastructure, and as generic infrastructure rather than cognitive infrastructure.
AI is not merely powering apps. It is increasingly shaping:
What options humans see (recommendation engines, search results, information feeds)
What actions are available (automated decision systems, access gates, platform affordances)
What predictions are taken as truth (risk assessments, diagnostic aids, predictive policing)
What scientific pathways become legible (drug discovery, materials science, climate modeling)
What decisions are automated by default (credit allocation, hiring screens, content moderation)
This is the emergence of a mediating layer between human judgment and consequential outcomes. When that layer becomes critical infrastructure—when society cannot function without it—it can no longer remain outside democratic governance simply because the companies building it are successful.
The parallel to historical infrastructure is exact: When private companies built the electrical grid, the railroad network, and the telecommunications system, they initially operated with minimal oversight. But as each became critical infrastructure, as society’s basic functioning came to depend on their reliable operation, democratic societies made a choice:
These systems were too critical to be governed solely by the profit motive.
Not because profit is bad. But because the market’s incentive structure optimizes for shareholder value rather than system stability. And when infrastructure failure threatens collective welfare, governance must internalize that externality.
The Real Question: Legitimacy, Not Revenue
The heart of the issue is simple:
Private firms can innovate. Only public institutions can confer legitimacy.
AI systems that increasingly mediate human judgment cannot remain outside democratic oversight simply because the companies that build them are successful. Revenue doesn’t establish the right to govern. Elections do. Accountability mechanisms do. Democratic legitimacy does.
Altman says governments shouldn’t pick winners. He’s right.
But governments must pick boundaries, because boundaries determine whether the society these systems operate in remains:
Stable
Sovereign
Democratic
Safe
Governable
The risk is not that OpenAI becomes too big to fail.
The risk is that democratic societies become too fragile to sustain failure.
What Public Governance of Cognitive Infrastructure Looks Like
If we take seriously the idea that AI is becoming critical infrastructure, the governance question isn’t whether, but how. Here’s what a public covenant for cognitive infrastructure might include:
1. National AI Compute Reserves
Just as nations maintain strategic petroleum reserves and emergency antiviral stockpiles, the coming decades will require government-owned model capacity, not to compete with private companies, but to ensure public continuity and prevent a single firm from becoming the de facto cognitive sovereign.
Notably, Altman himself has acknowledged this need: “Building a strategic national reserve of computing power makes a lot of sense. But this should be for the government’s benefit, not the benefit of private companies.”
This reserve would:
Guarantee compute availability for public safety, science, and emergency response.
Serve as a failover if private systems collapse or are withdrawn.
Provide baseline infrastructure for research, universities, and public institutions.
Prevent vendor lock-in for critical governmental functions.
This is not about nationalizing AI. It’s about national resilience.
2. Public AI Oversight Boards with Investigative Authority
AI oversight cannot remain a patchwork of “industry advisory councils,” voluntary commitments, and carefully worded press releases.
We need public bodies with investigative authority, analogous to:
The NTSB (accident investigation)
The FCC (communications infrastructure)
The FDA (safety and efficacy evaluation)
The GAO (audits and accountability)
Their mandate:
Evaluate catastrophic-risk scenarios through adversarial testing.
Audit model behavior and training data pipelines.
Enforce safety envelopes and deployment thresholds.
Coordinate emergency response to misuse events.
Publish findings transparently except where national security requires classification.
Not oversight as suggestion; oversight as governance.
3. Mandatory Model Telemetry (“Black Boxes for Algorithms”)
Every commercial aircraft carries a flight recorder. Every AGI-class model must as well.
Mandatory telemetry would include:
Execution logs for high-risk actions
System-state snapshots for post-incident forensic analysis
Usage provenance (what prompts, what contexts, what users)
Safety-rail activations and failures
Records of escalations and override events
Telemetry is not surveillance of users; it is the minimum viable substrate for forensic accountability. If an AI system accelerates a cyberattack, causes medical harm, or spreads a catastrophic false signal, the public must be able to reconstruct what happened and why.
You cannot have accountability without auditability. And you cannot have auditability without instrumentation.
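To make this concrete, here is a minimal sketch, in Python, of what a single entry in such a “black box” might look like. The field names, event categories, and hashing scheme are illustrative assumptions rather than an existing standard; a production system would need far richer schemas and stronger integrity guarantees.

```python
# A minimal sketch of a single telemetry record for an AGI-class system.
# All field names and categories are illustrative assumptions, not a standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional
import hashlib
import json


@dataclass
class TelemetryRecord:
    """One append-only entry in a model's 'flight recorder'."""
    timestamp: str                      # when the event occurred (UTC, ISO 8601)
    model_version: str                  # exact build/checkpoint identifier
    event_type: str                     # e.g. "high_risk_action", "safety_rail_trigger", "human_override"
    action_summary: str                 # what the system did or was asked to do
    risk_tier: str                      # severity bucket assigned by policy ("low" / "elevated" / "critical")
    safety_rails_fired: list[str] = field(default_factory=list)  # which guardrails activated
    escalated_to_human: bool = False    # whether a human reviewer was pulled in
    context_hash: Optional[str] = None  # hash of prompt/context, so usage provenance is traceable
                                        # without storing raw user content

    def seal(self) -> str:
        """Return a content hash so the record can be chained into a tamper-evident log."""
        return hashlib.sha256(json.dumps(self.__dict__, sort_keys=True).encode()).hexdigest()


# Example: recording a guardrail activation during a high-risk request
record = TelemetryRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="model-build-2025-11-03",
    event_type="safety_rail_trigger",
    action_summary="Refused request for synthesis instructions",
    risk_tier="critical",
    safety_rails_fired=["bio_risk_filter"],
    escalated_to_human=True,
    context_hash=hashlib.sha256(b"<redacted prompt>").hexdigest(),
)
print(record.seal())
```

The point of the content hash is that records can be chained into a tamper-evident log: accountability requires not just that events are recorded, but that the record cannot be quietly rewritten after the fact.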
4. Model Audit Trails and Provenance Documentation
AI without lineage is ungovernable.
An audit trail documents:
What the model was trained on (data provenance)
What fine-tunes were applied and when
What patches or safety updates were deployed
What test results were documented at each capability threshold
How the model’s behavior changed across versions
Pharmaceutical companies must document every ingredient, every process step, and every batch variation. Financial institutions must maintain audit trails for every transaction. AI systems that shape consequential decisions require the same.
You cannot regulate what you cannot track. You cannot trust what you cannot trace.
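As a rough illustration of what traceability could mean in practice, here is a sketch of a version-lineage record, again with every name and field hypothetical: each deployed version points back to its parent, its training data snapshots, its fine-tunes and safety patches, and its documented evaluation results, so an auditor can walk the chain from a deployed model back to its origins.

```python
# A minimal sketch of a version-lineage record for a deployed model.
# Names and structure are hypothetical, intended only to show what
# "provenance you can trace" could mean in practice.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class DatasetRef:
    name: str            # dataset identifier
    snapshot_hash: str   # content hash of the exact snapshot used
    license: str         # licensing/consent basis for inclusion


@dataclass
class ModelVersion:
    version: str                                   # e.g. "2.3.1"
    parent_version: Optional[str]                  # which version this was derived from
    training_data: list[DatasetRef] = field(default_factory=list)
    fine_tunes: list[str] = field(default_factory=list)       # applied fine-tune runs, in order
    safety_patches: list[str] = field(default_factory=list)   # post-deployment safety updates
    eval_results: dict[str, float] = field(default_factory=dict)  # score per capability benchmark
    behavior_notes: str = ""                       # documented behavioral changes vs. parent


def lineage(version: ModelVersion, registry: dict[str, ModelVersion]) -> list[str]:
    """Walk back through parent versions so an auditor can reconstruct how a model came to be."""
    chain = []
    current: Optional[ModelVersion] = version
    while current is not None:
        chain.append(current.version)
        current = registry.get(current.parent_version) if current.parent_version else None
    return chain
```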
5. Safety Envelopes (“Speed Limits for Cognitive Systems”)
Every form of critical infrastructure has operational limits:
Nuclear plants have regulatory containment thresholds.
Aviation has maximum tolerances and redundancy requirements.
Pharmaceuticals have dosage ceilings and interaction warnings.
Financial markets have circuit breakers to prevent cascading failures.
AI needs the same: safety envelopes that define allowable operational boundaries.
Examples:
Maximum autonomous action permissions before mandatory human review
Rate limits for self-modification or recursive improvement
Escalation gates for high-risk reasoning chains
Mandatory human handoff for specific categories of consequential decisions
Hard prohibitions on specific outputs (e.g., detailed instructions for bioweapons synthesis)
These are not restraints on innovation. They are the engineering controls that allow innovation to scale safely.
We don’t let pharmaceutical companies skip Phase II trials because innovation is exciting. We don’t let nuclear plants operate without containment because energy is urgent. We don’t exempt self-driving cars from safety standards because autonomy is the future.
The same logic applies to cognitive infrastructure.
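What would a safety envelope look like as an engineering artifact rather than a policy aspiration? Here is a minimal sketch; the thresholds, category names, and decision logic are illustrative assumptions, not recommended values.

```python
# A minimal sketch of a machine-readable "safety envelope": hard operational
# limits checked before any autonomous action is executed. Thresholds and
# category names are illustrative assumptions, not recommended values.
from dataclasses import dataclass


@dataclass(frozen=True)
class SafetyEnvelope:
    max_autonomous_actions: int                # actions allowed before mandatory human review
    max_self_modifications_per_day: int        # rate limit on self-modification / recursive improvement
    human_handoff_categories: frozenset[str]   # decision types that always require a human
    prohibited_outputs: frozenset[str]         # categories that are never produced


DEFAULT_ENVELOPE = SafetyEnvelope(
    max_autonomous_actions=50,
    max_self_modifications_per_day=0,
    human_handoff_categories=frozenset({"medical_diagnosis", "credit_decision", "use_of_force"}),
    prohibited_outputs=frozenset({"bioweapon_synthesis", "critical_exploit_code"}),
)


def within_envelope(action_category: str, actions_so_far: int, env: SafetyEnvelope = DEFAULT_ENVELOPE) -> str:
    """Return what the system is allowed to do next: proceed, escalate, or refuse."""
    if action_category in env.prohibited_outputs:
        return "refuse"                 # hard prohibition, no override path
    if action_category in env.human_handoff_categories:
        return "escalate_to_human"      # consequential decision: mandatory handoff
    if actions_so_far >= env.max_autonomous_actions:
        return "escalate_to_human"      # autonomy budget exhausted
    return "proceed"


# Example: the 51st autonomous action in a session trips the review threshold
print(within_envelope("routine_summary", actions_so_far=51))  # -> "escalate_to_human"
```

The essential design property is that the envelope is checked before an action executes, not reconstructed after the harm.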
Addressing the “China Competition” Argument
The industry’s primary counter-argument is that regulation will cede AI leadership to China. Altman emphasized this concern directly in his May 2025 testimony: “The future of artificial general intelligence can be almost unimaginably bright, but only if we take concrete steps to ensure that an American-led version of AI, built on democratic values like freedom and transparency, prevails over an authoritarian one.”
This argument contains a kernel of truth wrapped in a logical fallacy.
The kernel of truth: AI leadership matters. The nation that sets the standards, builds the infrastructure, and defines the norms will shape how this technology develops globally. Ceding that position would have profound geopolitical consequences.
The fallacy: That we must choose between innovation speed and safety. That governance mechanisms necessarily slow development. That democratic values cannot coexist with technological leadership.
History suggests otherwise. The United States led in aviation precisely because it developed rigorous safety standards. It led in pharmaceuticals because FDA approval became the global benchmark. It led in financial markets because regulatory frameworks created trust.
In each case, governance didn’t prevent leadership; it enabled it. Because safety creates trust, and trust creates adoption at scale.
More fundamentally, if “American-led AI built on democratic values” is the goal, then those democratic values must include democratic governance. You cannot claim to build AI aligned with freedom, transparency, and democratic principles while simultaneously arguing that democratic institutions should have no meaningful oversight over its development and deployment.
The choice isn’t between American leadership and Chinese dominance. The choice is between governed American leadership and ungoverned corporate sovereignty: a system in which private entities wield infrastructural power without public accountability.
Public opinion reflects these concerns: A 2025 Heartland survey found that 72% of U.S. adults expressed concerns about AI, including privacy intrusions, cybersecurity risks, a lack of transparency, and racial and gender biases. These doubts span partisan lines; Americans across the political spectrum increasingly question whether new technologies should be embraced without demonstrated track records of safety, fairness, and security.
The Contradiction at the Core
Return to the numbers. OpenAI’s CFO, Sarah Friar, told CNBC regarding the Stargate buildout: “No one in the history of man built data centers this fast.” The urgency is real. The scale is unprecedented. The infrastructure being constructed will shape civilization for decades.
But this scale and urgency work against the “leave it to the market” position, not in its favor.
If the technology is so transformative that it requires $500 billion in immediate investment... If the compute demands are so critical that “infrastructure is destiny”... If the applications are so fundamental that they’ll reshape healthcare, defense, governance, and scientific discovery...
Then it’s too important to be left to ungoverned private actors whose fiduciary duty is to shareholders, not citizens.
Altman himself acknowledged the stakes in his November statement: “If we screw up and can’t fix it, we should fail, and other companies will continue on doing good work and servicing customers. That’s how capitalism works.”
But this framing only works for consumer products. It doesn’t work for critical infrastructure. When Boeing’s 737 MAX failed, people died, but commercial aviation survived because the regulatory system eventually forced accountability. When Enron collapsed, portfolios burned, but financial markets continued because oversight mechanisms (however imperfect) maintained system legitimacy.
What happens when critical cognitive infrastructure fails? When the systems mediating scientific research, coordinating public health responses, or informing governance decisions prove unreliable or compromised?
You can’t “let another company take its place” if the failure cascades through systems that society depends on for basic functions. You can’t rely on market discipline if the externalities lead to societal collapse.
The Political Economy of Cognitive Sovereignty
There’s another dimension here that the public discourse largely misses: This isn’t just about safety. It’s about power.
AI infrastructure determines:
Who has access to the capability?
What questions can be asked?
What answers become authoritative?
What coordination becomes possible?
What governance becomes enforceable?
When that infrastructure is privately owned and operated without democratic oversight, you’ve created a form of cognitive sovereignty: the power to shape what becomes thinkable, actionable, and governable.
No democratic society can outsource sovereignty to private actors and remain democratic. Not military sovereignty. Not monetary sovereignty. Not judicial sovereignty.
And not cognitive sovereignty.
This isn’t hypothetical. We’ve already seen how platform architecture shapes political discourse, how algorithmic curation influences election outcomes, and how recommendation systems alter cultural consumption patterns. And those effects came from relatively simple systems.
When the systems become complex enough to mediate scientific discovery, coordinate multi-sector responses to crises, or optimize resource allocation at a civilizational scale, the power embedded in their architecture becomes functionally governmental.
Which means it must be governed democratically, not because democracy is perfect, but because it’s the only legitimate basis for exercising collective power in a free society.
What Altman Gets Right (And What He Misses)
Altman is correct that:
Markets drive innovation better than central planning.
The government shouldn’t pick winners among competing companies.
Bailouts create moral hazard and misallocate capital.
The United States should maintain AI leadership.
Building infrastructure at scale requires private sector efficiency.
Where he errs is in treating these truths as sufficient answers to the governance question.
Acknowledging that markets drive innovation doesn’t resolve who sets the boundaries within which that innovation operates. Opposing government winner-picking doesn’t eliminate the need for public oversight of systems that become critical infrastructure. Rejecting bailouts doesn’t address whether public institutions should have meaningful regulatory authority.
The gap in his argument is the gap between economic policy and governance architecture. He addresses the former while evading the latter.
And that evasion becomes most visible in the contrast between his words and his actions. He says governments shouldn’t guarantee private infrastructure, while operating an infrastructure project announced at the White House with the President, framed explicitly as serving national interests, with Trump using executive orders to help facilitate the buildout.
He says regulation shouldn’t slow development, while his company submits policy white papers urging expanded federal support for AI infrastructure and positioning data centers as eligible for industrial subsidies.
He says the market should determine outcomes while lobbying for favorable policy frameworks, energy access, and public coordination with private buildouts.
The dissonance is the tell. OpenAI doesn’t actually want the government out of the picture. It wants the government to serve as a facilitator, not a regulator. It wants public resources channeled toward private goals without public accountability over private power.
That’s not capitalism. That’s not democracy. That’s corporate sovereignty dressed in the language of free markets.
The Choice Ahead
We stand at an inflection point. The infrastructure being built over the next five years will determine the architecture of power for the next fifty.
The question is not whether AI will reshape society. It will.
The question is whether that reshaping happens under democratic governance or corporate sovereignty.
Whether the boundaries are set by public institutions accountable to citizens or private entities accountable to shareholders.
Whether the power concentrated in cognitive infrastructure remains contestable through democratic processes or becomes uncontestable through infrastructural fait accompli.
Sam Altman is right: Governments shouldn’t pick winners. Markets should determine which companies succeed in competitive landscapes.
But governments must pick boundaries, because those boundaries determine whether the civilization these systems operate in remains:
Stable (resilient to failure rather than fragile to disruption)
Sovereign (governable by democratic processes rather than corporate fiat)
Safe (engineered for containment rather than optimized solely for capability)
Accountable (auditable and contestable rather than opaque and absolute)
Legitimate (deriving authority from democratic consent rather than market dominance)
The trillion-dollar buildout is happening regardless. The question is whether it happens with guardrails or without them. With public accountability or without it. With democratic legitimacy or without it.
Because the alternative to public governance isn’t “no governance,” it’s private governance by entities whose power grows with every petabyte of compute, every billion in investment, every percentage point of market concentration.
And that is the point Altman’s argument never touches, because acknowledging it would require admitting that “leave it to the market” isn’t a principled position about innovation.
It’s a strategic position about power.
The cognitive infrastructure being built today will mediate how humanity thinks, coordinates, and governs tomorrow. That’s too important to be left to unaccountable private control—not because markets are bad, but because democracy is good. And democracy requires that infrastructural power remain contestable by those who live under its influence.
The question isn’t whether we can afford robust governance of AI. The question is whether we can afford not to.