2 Comments
Neural Foundry

The shift from viewing AI as just technology to recognizing it as cognitive infrastructure is something I hadn't fully considered before. Your point about national AI compute reserves makes practical sense when you look at how strategic petroleum reserves function. The comparison to how electrical grids and telecom networks eventually required democratic oversight is particularly convincing. I wonder, though, if the five-year timeline you mention is optimistic, given how slowly regulatory bodies typically move relative to tech development?

Rachel Maron

Regulatory bodies do move more slowly than technology. That’s the structural problem. If we wait for Congress or federal agencies to catch up on their own, we’ll be trying to retrofit governance onto infrastructure that’s already become indispensable and effectively irreversible.

The five-year window isn’t an estimate of when regulators will act. It’s a boundary on how long democratic societies can afford to delay action before cognitive infrastructure becomes another domain, like social media, where private architecture becomes de facto public policy.

When the decision-making substrate itself is privatized, oversight becomes nearly impossible.

This is why the analogy to electrical grids, telecom, and railroads matters: none of those systems were regulated before they became critical. They were brought under public control only after people realized the cost of leaving them ungoverned. But the lag was painful. And in AI, the cost of that lag won't just be monopolies or price gouging; it will be the loss of cognitive sovereignty.

National AI compute reserves, telemetry requirements, and safety envelopes aren’t about slowing innovation. They’re about building the conditions under which innovation can scale without destabilizing the polity it depends on.

So, yes, five years is optimistic (probably overly so) for government action. But five years is also the maximum buffer before the architectural decisions made by private actors become the operating system of public life. The timeline isn’t rhetorical; it’s a warning about how quickly power hardens into structure.

If regulators miss this window, they won’t be regulating AI companies in the future. They’ll be trying to regulate the environment that those companies have already built. And historically, that’s when democratic governance starts losing negotiating leverage. Just look at where we are with algorithmic content today.
