Anthropic Mythos and the End of Temporal Arbitrage
How Automated Vulnerability Discovery Collapses the Deferred-Risk Model Behind Venture-Backed Software
Note: Many of my colleagues have sent me links to the Anthropic Mythos story, each asking for guidance on how to think about this development. This is not a technical story but a story of deferred incentives, and it is through that lens that I run the following analysis.
For three decades, venture-backed software investors and operators worked under a stable assumption: vulnerability discovery was scarce, uneven, and expensive. This assumption shaped everything. Backlogs could grow without immediate consequence. Security could be deferred behind roadmap pressure. Quality could be subordinated to velocity because exposure was probabilistic and delayed. Anthropic’s Mythos has invalidated these core assumptions in ways that materially affect capital allocation. Unlike prior-generation tools that required integration, configuration, and human triage to function at scale, Mythos operates as an autonomous agent across heterogeneous estates without those constraints. Automated vulnerability discovery systems like Mythos collapse the cost and time required to locate exploitable software defects across fleets and estates. They do not create new classes of defects; they remove the scarcity constraint that previously limited how quickly existing failures were found. The effect is structural and cross-domain. Latent security debt is no longer latent: it is continuously discoverable.
This fundamentally changes the economic character of software as a class. Unresolved vulnerabilities no longer behave like inventory. They behave like unbooked liabilities: each deferred patch carries a time-bound exposure whose probability of discovery is increasing and whose time to exploitation is compressing. The security backlog, long a neutral queue of work with no user persona to build for, becomes a ledger of accepted risk under a discovery regime that no longer supports deferral. That is the surface description of the challenge; the underlying mechanism is best described as temporal arbitrage as a capital strategy.
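One way to make the unbooked-liability framing concrete is a toy discovery model. Assume, purely for illustration, that independent inspection attempts against a latent defect arrive at a constant rate λ per year, so the probability it has been found by time t is 1 − e^(−λt) and the expected time to discovery is 1/λ. The rates below are hypothetical; the point is that automated discovery changes λ by orders of magnitude while the defect population stays fixed, which is exactly the "latent debt becomes continuously discoverable" effect:

```python
import math

def p_discovered(rate_per_year: float, years: float) -> float:
    """Probability a latent defect is found within `years`, modeling
    discovery attempts as a Poisson process with the given rate."""
    return 1.0 - math.exp(-rate_per_year * years)

# Hypothetical rates: scarce human research vs. continuous machine inspection.
human_rate = 0.1       # roughly one serious look per decade at this code path
automated_rate = 50.0  # persistent automated scanning

# Under scarcity, a five-year deferral usually goes unnoticed...
print(round(p_discovered(human_rate, 5.0), 3))           # ≈ 0.393
# ...under automation, the same defect is near-certain to be found in a week.
print(round(p_discovered(automated_rate, 1.0 / 52), 3))
# Expected time-to-discovery collapses from 1/0.1 = 10 years to ~1 week.
```

Nothing about the defect changed between the two calls; only the discovery regime did, which is why deferral stops being a pricing strategy.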
Venture-backed software was financed under an assumption that time itself could be exploited. The organization ships before it knows, accumulates defects as a byproduct of meeting capital timelines, converts early adoption into valuation, and exits or recapitalizes before the accumulated consequences fully materialize. The codebase, the company, and even the users may be treated as disposable. Strategically, the firm is treated as a vehicle for value rather than a compounding asset, where ultimate outcomes are expected to be realized through timed capital events. In this model, security, resilience, and correctness are deferrable costs; the system prices them into the future and assumes that future can be outrun.
This is the equilibrium that has held in the venture software business since 1995. Automated vulnerability discovery and exploitation breaks that equilibrium completely. When vulnerability discovery becomes dense, systematic, and inexpensive, the future is no longer distant; the accumulated consequences embedded in the codebase are pulled forward. The time window between creation and exposure compresses and the organization is forced to confront the totality of its deferred decisions within operating time, not exit time. Overnight, the security backlog stops behaving like inventory and begins behaving like an unplanned matured obligation.
The obvious response to this observation has been to say, “the same automation that accelerates discovery will also accelerate remediation. Detection systems will find the defects. Patching systems will fix them. Static analysis, dynamic analysis, code generation, dependency intelligence, and automated repair will scale together. The system, in this account, will preserve equilibrium through reciprocal automation.” That response feels natural, but it fundamentally misunderstands the asymmetry in cost and motion. Detection is a search problem that benefits directly from scale, parallelism, and pattern recognition. Once the cost of search collapses, discovery density increases across the entire surface. Remediation is a transformation problem. It requires understanding intent, dependency interaction, state behavior, regression risk, business impact, and release safety.
AI-driven automation can reduce portions of the remediation burden, but it cannot erase the coordination cost, validation burden, or system-risk burden attached to change. Faster discovery of security defects does not produce proportional remediation capacity: it produces intake pressure. When both sides automate, the system accelerates: discovery improves, exploitation improves, patch generation improves, and exploit adaptation improves. The temporal cycle that drives capital strategy compresses. The organization is no longer managing a software vulnerability program; it finds itself inside a machine-speed contest over the integrity of its own artifact. This is now the baseline condition of software companies, writ large.
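The intake-pressure asymmetry can be illustrated with a toy queue. The rates below are hypothetical; the structural point is that if search scales roughly 10x while transformation scales roughly 2x, the open-findings backlog grows without bound even though both sides are "automating":

```python
def backlog_trajectory(discoveries_per_week: float,
                       remediation_capacity_per_week: float,
                       weeks: int) -> list[float]:
    """Open findings over time for a fixed intake rate and a fixed
    remediation capacity (a deterministic single-queue model)."""
    backlog, history = 0.0, []
    for _ in range(weeks):
        backlog += discoveries_per_week
        backlog = max(0.0, backlog - remediation_capacity_per_week)
        history.append(backlog)
    return history

# Pre-automation (hypothetical numbers): discovery is the bottleneck,
# so the queue drains every week.
print(backlog_trajectory(5, 8, 4)[-1])    # 0.0

# Post-automation: search scales ~10x, remediation ~2x, because a
# transformation problem does not parallelize like a search problem.
print(backlog_trajectory(50, 16, 4)[-1])  # 136.0 and climbing
```

The second trajectory never stabilizes; that unbounded residual is what the essay calls intake pressure, and no plausible remediation multiplier restores equilibrium once discovery density outruns change capacity.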
This is not the first time the world has confronted an all-hands-on-deck critical software defect. Those of us who worked through Y2K can understand this at the level of forced remediation; that understanding fails at the level of structure, however, because Y2K work was bounded by a known defect class and a fixed date. The current condition of continuous discovery and exploitation at negligible cost has neither boundary nor termination: every codebase, every dependency tree, every internal tool, every acquired system, and every integration point exists inside a persistent discovery field. This global persistence removes the go-to capital strategy: escape through delay.
It also removes the possibility of escape through the fantasy of later automation. The claim that future patching systems will absorb present negligence is the old temporal arbitrage restated in technical form; it assumes that consequence can still be pushed forward because a later capability will carry it. That assumption has already failed once, with the implications extending far beyond engineering. A company whose software cannot withstand systematic automated inspection by hostile AI is not merely insecure: it is mispriced. Its valuation assumed that certain classes of risk would remain undiscovered long enough to be irrelevant to capital realization. That assumption is no longer defensible. Value, in this context, shifts location.
Under temporal arbitrage strategies, value was located in motion: user growth, revenue acceleration, narrative dominance, and the ability to reach successive funding events before operational contradictions surfaced. Under a continuous defect discovery paradigm, value relocates to the artifact itself: the quality, resilience, and trustworthiness of the system under inspection. This ontological shift redefines what it means for a software company to hold value. Software now mediates health systems, financial systems, logistics, communications, governance, and social coordination. It functions as infrastructure even when the company that produced it did not intend to become infrastructure. People depend on software with the quiet expectation that it will safely hold what they value. That dependence is not erased because the software was financed as a temporary capital vehicle.
An operating model that introduces defects freely and trusts future machines to repair them is incompatible with that role; the operating model must therefore evolve. The relevant question is no longer whether vulnerabilities exist. They do. The question is whether the organization can operate under conditions where vulnerabilities are continuously surfaced by AI and must be resolved within a compressed time horizon. This introduces a new control variable for value defense: remediation velocity. Remediation velocity measures whether the organization can absorb and neutralize a continuous stream of adversarially discovered security defects without destabilizing product delivery. Most organizations are not structured for this condition. They are structured for episodic audits, periodic testing, and reactive incident response. Those operating models assume defect discovery scarcity, an assumption that no longer holds.
As security defect discovery becomes dense and systematic, several shifts occur. First, exposure becomes attributable. A deferred vulnerability is a recorded decision to carry known risk into an environment where discovery is expected. Second, perimeter distinctions erode. Internal systems, staging environments, legacy services, and vendor integrations are subject to the same discovery dynamics as externally facing infrastructure. The classification of “non-critical” becomes unstable once lateral movement and chaining are trivial. Third, standards of care move. As automated discovery becomes widely available, the definition of reasonably knowable risk expands. Liability frameworks and insurance models will adjust accordingly. Assertions of posture will carry less weight than demonstrated, evidenced response performance.
These shifts converge on a single condition: the firm’s software fleet is now a capital liability surface under continuous adversarial inspection. This is not a security problem to be solved. It is a balance sheet correction to be absorbed. The corrective is not another tool layer or stack, but an operating model that treats trust, quality, resilience, and security as capital value conditions rather than downstream operational controls. That model must connect software production to value defense, capital timing, buyer confidence, diligence readiness, insurance exposure, and executive accountability.
This requires a transition from backlog management to Exposure Management. Exposure Management requires measurable control over three dimensions: coverage, latency, and dependency governance.
Coverage defines the proportion of the software estate subject to continuous and instrumented analysis.
Latency defines the time between identification and remediation.
Dependency Governance defines the control of external code ingestion and the minimization of unnecessary surface area.
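As a sketch of how the three dimensions become operator-trackable numbers (the inventory, findings, and field names below are entirely hypothetical), each reduces to a measurement that could be reported weekly:

```python
from datetime import date
from statistics import median

# Hypothetical estate inventory: (service, under_continuous_analysis, direct_deps)
estate = [
    ("billing-api", True, 42),
    ("legacy-batch", False, 118),
    ("auth-gateway", True, 17),
    ("vendor-sync", False, 63),
]

# Hypothetical findings: (discovered, remediated)
findings = [
    (date(2025, 1, 6), date(2025, 1, 9)),
    (date(2025, 1, 7), date(2025, 2, 4)),
    (date(2025, 1, 13), date(2025, 1, 20)),
]

# Coverage: share of the estate under continuous, instrumented analysis.
coverage = sum(1 for _, analyzed, _ in estate if analyzed) / len(estate)

# Latency: median days between identification and remediation.
latency_days = median((fixed - found).days for found, fixed in findings)

# Dependency governance: total ingested external surface area.
dep_surface = sum(deps for _, _, deps in estate)

print(f"coverage={coverage:.0%} latency={latency_days}d deps={dep_surface}")
# → coverage=50% latency=7d deps=240
```

The specific thresholds an organization sets matter less than that all three numbers exist, move in the right direction, and are defensible in diligence.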
Security defect discovery must be treated as a production input: the organization must be able to ingest security findings at scale, triage them by exploitability and impact faster than an adversarial AI can exploit them, and execute remediation as a continuous function on a living codebase. But this is only the minimum requirement. The deeper requirement is trust value management. A software company operating under this condition must know where trust value is created, where it is consumed, where it is degraded, and where trust debt is accumulating inside the stakeholder value journey. It must be able to translate software quality into financial exposure, customer confidence, sales velocity, diligence defensibility, and valuation protection. It must be able to keep the velocity thesis alive without depending on ignorance, deferral, or delayed consequence.
That is the proper strategic correction to the capital operating model, not a “compliance project” or a “risk management initiative.” An operating model capable of producing trust at machine speed requires that trust itself be treated as a manufactured, measurable asset: one with production inputs, quality controls, and verifiable outputs. Ultimately, the answer to the automated vulnerability discovery panic that Mythos sparked is an operating model that makes trustworthy software production a governed value system. For decades, the industry converted future consequences into present value by assuming that those future consequences could be outrun. Automated security defect discovery removes that assumption, and automated patching does not restore it; it accelerates the field in which the original debt must now be paid. The security backlog is a liability exposure being systematically searched by value-eroding adversaries. The timeline for temporal arbitrage has been rendered brittle and short-horizon. The dominant question is no longer whether the organization contains defects. It does. The dominant question is whether the organization has an operating model capable of producing trust at the speed of automated adversarial discovery.
Addenda:
Automated security defect discovery changes the economic character of software by converting latent security debt into continuously discoverable exposure.
If this analysis is wrong, the evidence will appear as stable remediation ratios, unchanged underwriting assumptions, unchanged diligence practices, and no measurable increase in defect-driven pressure from customers, insurers, acquirers, or regulators.
If this analysis is correct, the industry response will be narrative dilution, absorbing this argument into tooling strategies and existing compliance categories without adopting the operating model change it requires. That response is materially insufficient: rebranding the security stack is not the same as evolving the capital operating model to meet the moment.


