The Catastrophe That Trust Could Have Prevented
A ransomware attack didn’t just expose Ascension’s systems; it revealed how executives prioritized efficiency over trust, and patients paid the price.
To the Reader
This piece responds to Ars Technica's "How Weak Passwords and Other Failings Led to the Catastrophic Breach of Ascension." That story frames the breach as a technical lapse, a narrative of weak passwords, legacy protocols, and misconfigured systems that reads like any cybersecurity postmortem from the past two decades. Here we reframe it: breaches are never merely technical accidents waiting to happen. They are manufactured through deliberate management strategy, carefully constructed liability shields, and systemic incentives that trade away trust for short-term efficiency. What follows is an examination of how trust should have been manufactured at Ascension, why it wasn't, and why these failures continue to recur across industries with clockwork predictability.
The technical details of the Ascension breach (the contractor's laptop, the malicious link, the Kerberoasting attack) are symptoms masquerading as causes. The real story lies in the absent architecture of trust manufacturing, the missing governors that should have tethered presentation to proof, and the liability-free zone that shields executives from the consequences of their negligence while patients bear the costs. This is not a story about cybersecurity. It is a story about power, accountability, and the systematic destruction of trust as an asset.
The Catastrophe
In May 2024, Ascension, a healthcare system comprising 140 hospitals across 19 states, suffered a ransomware breach that not only stole data but also paralyzed the fundamental infrastructure of patient care. Electronic health records went offline without warning. Surgical schedules that algorithms had optimized for months disintegrated into chaos. Personal data for 5.6 million patients was stolen, but that number understates the true scope of the disaster. On the ground, the consequences were immediate and life-threatening.
Nurses found themselves thrust back into a pre-digital age, tracking medications on scraps of paper and hoping their handwriting would be legible to the next shift. The invisible choreography of modern healthcare, the constant digital exchange of lab results, medication orders, allergy alerts, and care notes, vanished. Surgeons faced delays as operating-room schedules, which were usually automated and optimized in real-time, had to be manually reassembled. Lab results were slowed to a crawl, sometimes fatally, as orders flowed through improvised analog workarounds that bypassed the safety checks built into digital systems. Ambulances were rerouted away from Ascension facilities, not because there were no beds available, but because the systems that tracked bed availability, patient acuity, and care capacity were suddenly inaccessible.
This was not merely an IT failure. This was the collapse of a trust system on which human life directly depends. Patients entered hospitals carrying the reasonable assumption that continuity of care included not just medical competence but systemic reliability; that the infrastructure of trust would remain intact. The invisible systems that ensure medical records are accurate, prescriptions are safe, drug interactions are flagged, and care coordination is seamless had vanished overnight, leaving a healthcare system operating on improvisation and hope.
The forensic story, as reconstructed by investigators, traced the breach back to February 2024. A contractor's laptop was infected via a malicious link in what appears to have been a targeted spear-phishing attack. From that single point of compromise, the attackers pivoted into Ascension's Active Directory environment, the crown jewel system that controls access across the entire enterprise. Once inside, they leveraged a technique called "Kerberoasting," which exploits weaknesses in how Windows service accounts are configured, to systematically escalate their privileges. Over the course of three months, they moved laterally through the network, exfiltrating data and positioning themselves for maximum disruption.
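Kerberoasting leaves a characteristic trail: bursts of service-ticket requests (Windows Event ID 4769) using the legacy RC4-HMAC encryption type, which attackers request because RC4-encrypted tickets are cheaper to crack offline. A minimal detection sketch follows; the log-record field names are illustrative rather than any particular SIEM's schema.

```python
# Sketch: flag potential Kerberoasting activity in parsed Windows security logs.
# Assumes events have already been normalized into dicts; field names here are
# illustrative, not a specific vendor schema.

RC4_HMAC = 0x17  # legacy ticket encryption type favored by Kerberoasting tools

def flag_kerberoast_candidates(events, threshold=5):
    """Return accounts that requested many RC4 service tickets (Event ID 4769)."""
    counts = {}
    for ev in events:
        if ev.get("event_id") != 4769:
            continue
        if ev.get("ticket_encryption_type") != RC4_HMAC:
            continue
        requester = ev.get("account_name", "unknown")
        counts[requester] = counts.get(requester, 0) + 1
    return {acct: n for acct, n in counts.items() if n >= threshold}

# A burst of RC4 ticket requests from one account crosses the threshold.
events = [
    {"event_id": 4769, "ticket_encryption_type": 0x17, "account_name": "contractor01"}
] * 6
print(flag_kerberoast_candidates(events))  # → {'contractor01': 6}
```

A heuristic this simple produces false positives in environments with genuine RC4-only legacy systems, which is exactly why detection coverage needs the artifact-backed attestation discussed below, not just a script someone once ran.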
Most reporting has focused on this technical surface: the weak passwords that should have been stronger, the legacy RC4 ciphers that should have been disabled, the excessive privileges that should have been restricted. These details read like a cybersecurity checklist from 2015, familiar to anyone who has read breach reports over the past decade. However, treating these as causes is akin to blaming a building collapse on the specific beam that failed, while overlooking the absent foundation that made the collapse inevitable. Those technical failures are not causes; they are the visible symptoms of a deeper structural failure in how trust is manufactured, governed, and maintained as an enterprise asset.
The Frame of Negligence
Breaches do not emerge from entropy or bad luck. They are manufactured through specific management choices, resource allocation decisions, and governance frameworks that prioritize short-term efficiency over long-term resilience. Ascension's failure was not about a single weak password discovered by an unlucky attacker. It was about the systematic absence of what Trust Value Management (TVM) describes as a Trust Factory, a working architecture designed to manufacture trust as a measurable, renewable asset.
The Trust Factory is not a metaphor. It is a specific operational framework comprising six core programs and fifty-nine distinct subprocesses, each with a defined purpose, inputs, outputs, cadence, and quality gates. When properly implemented and resourced, these subprocesses produce certified trust artifacts that can be assembled into Trust Stories, tested against real-world conditions, renewed on predetermined schedules, and presented as auditable evidence to buyers, auditors, regulators, and other stakeholders who depend on organizational trustworthiness.
Ascension did not run such a factory. The absence of its outputs (the missing artifacts, the ungoverned processes, the uncertified claims) is precisely what allowed a single contractor's laptop to collapse a healthcare empire serving millions of patients across multiple states.
The forensic story of the breach can be mapped directly to Trust Factory subprocesses that were either ungoverned, under-resourced, or simply absent:
Identity & Access Governance: Service accounts in any properly governed environment should be managed through randomized, automatically rotated Managed Service Accounts with clearly defined lifespans and purposes. Certified artifacts from this subprocess would have documented the business purpose of each service account, the cadence of credential rotation, and the renewal process for each identity. Each artifact would have maintained a clear lineage back to raw inputs: password policies, rotation logs, credential inventories, access reviews, and exception approvals. All of this would have been certified by Trust Quality processes that verify completeness and accuracy.
No such artifacts existed at Ascension. Instead, static service accounts with weak, manually managed passwords became the single thread that attackers pulled to unravel the entire enterprise. The Kerberoasting attack, which proved so devastating, would have been structurally impossible against properly governed Managed Service Accounts.
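The audit that would surface such accounts is not complicated. Here is a minimal sketch, assuming a hypothetical service-account inventory (the schema and 90-day policy are illustrative; real data would come from a directory export or a privileged-access-management tool):

```python
from datetime import date, timedelta

# Assumed rotation policy for this sketch; actual cadence is an
# organizational decision recorded in the governing artifact.
MAX_AGE = timedelta(days=90)

def stale_accounts(inventory, today):
    """Return accounts whose credentials violate the rotation policy,
    plus any account not enrolled in managed (auto-rotating) credentials."""
    findings = []
    for acct in inventory:
        if not acct["managed"]:
            findings.append((acct["name"], "not a Managed Service Account"))
        elif today - acct["last_rotated"] > MAX_AGE:
            findings.append((acct["name"], "rotation overdue"))
    return findings

inventory = [
    {"name": "svc-backup", "managed": False, "last_rotated": date(2019, 3, 1)},
    {"name": "svc-ehr",    "managed": True,  "last_rotated": date(2024, 4, 1)},
]
print(stale_accounts(inventory, today=date(2024, 5, 1)))
# → [('svc-backup', 'not a Managed Service Account')]
```

The point of the Trust Factory framing is that this check's output only becomes an artifact when it carries lineage (which inventory, which policy version), a renewal schedule, and a certification that the inventory itself was complete.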
Network Protection & Segmentation: In a functioning trust factory, network segmentation artifacts provide renewable proof that sensitive systems are isolated from general network traffic, with clear documentation showing how access boundaries are maintained, tested, and renewed. These artifacts demonstrate that a compromise in one network segment cannot cascade into mission-critical systems. The segregation of contractor devices from core infrastructure should not just be implemented but also be provable through certified outputs.
At Ascension, a contractor's laptop was able to pivot directly into Active Directory, the most sensitive system in the entire enterprise. That pathway should have been structurally impossible: multiple layers of segmentation should have contained the compromise at the network edge. The absence of segmentation artifacts is not merely a technical oversight or budget constraint. It represents a fundamental governance failure: executives had not invested in the subprocess designed to produce and maintain these critical boundaries.
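What "provable segmentation" means in practice can be sketched as a policy check over exported firewall rules. The zone names and rule format below are hypothetical; the idea is that the forbidden-boundary list is itself a governed artifact, and the check runs on a renewal cadence rather than once:

```python
# Sketch: verify that no allow-rule permits traffic from the contractor
# segment into domain-controller infrastructure. Zone and rule names are
# illustrative; a real check would parse exported firewall configurations.

FORBIDDEN = {("contractor_vlan", "domain_controllers")}

def segmentation_violations(rules):
    """Return allow-rules that cross a forbidden segment boundary."""
    return [
        r for r in rules
        if r["action"] == "allow" and (r["src_zone"], r["dst_zone"]) in FORBIDDEN
    ]

rules = [
    {"id": 1, "action": "allow", "src_zone": "contractor_vlan", "dst_zone": "internet"},
    {"id": 2, "action": "allow", "src_zone": "contractor_vlan", "dst_zone": "domain_controllers"},
]
print([r["id"] for r in segmentation_violations(rules)])  # → [2]
```

A passing run, timestamped and tied to the config export it examined, is the kind of renewable proof the paragraph above describes; the Ascension breach path implies no such boundary was ever enforced, let alone attested.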
Third-Party Governance: The attack vector was a contractor device, a third-party endpoint that somehow gained sufficient network access to compromise core infrastructure. A functioning Third-Party Governance subprocess would have produced a continuous stream of onboarding artifacts, including due diligence records that document security assessments of contractor devices, contractual controls that specify security requirements, renewal checks that verify ongoing compliance, and monitoring outputs that provide continuous proof that third parties meet security requirements throughout their engagement.
These artifacts would have been renewable on predetermined schedules, auditable by compliance teams, and tied to specific business justifications for third-party access. None appears to have existed at Ascension. The compromise was therefore not just a technical failure but a systemic governance breakdown: the enterprise had no mechanism to prove that third-party access would not compromise patient care, and no process to verify that assurance continuously.
Logging, Monitoring & Detection: Perhaps most damaging of all, the attackers remained undetected for three months while they systematically compromised systems and exfiltrated data. In a properly running trust factory, the Logging, Monitoring & Detection subprocess would have continuously shipped renewable monitoring artifacts: logs tied directly to specific threat exposures, detection coverage that has been attested and tested, and renewal gates that must be passed to maintain certification.
The fact that none of these monitoring processes triggered, and that the breach remained invisible for months, tells us not just that technical monitoring failed, but that the enterprise lacked the foundational output objects that demonstrate monitoring sufficiency. There were no artifacts proving what was being monitored, no certification that monitoring was adequate for the threat environment, and no process for renewing these assurances as the threat landscape evolved.
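The "renewal gate" concept is concrete enough to sketch. Under the TVM vocabulary used here (the schema below is illustrative, not a product), a monitoring artifact is admissible only while its certification is current:

```python
from datetime import date, timedelta

# Sketch: a renewal gate for a monitoring artifact. Field names follow the
# TVM vocabulary used in this piece; the schema itself is illustrative.

def admissible(artifact, today):
    """An artifact passes the gate only if certified and within its renewal window."""
    expires = artifact["certified_on"] + timedelta(days=artifact["renewal_days"])
    return artifact["certified"] and today <= expires

artifact = {
    "name": "detection-coverage/active-directory",
    "certified": True,
    "certified_on": date(2024, 1, 15),
    "renewal_days": 90,
}
# Certified mid-January with a 90-day window: lapsed by May.
print(admissible(artifact, today=date(2024, 5, 1)))  # → False
```

The gate's value is precisely that it fails loudly: an enterprise that cannot produce a currently admissible detection-coverage artifact knows, before any breach, that it cannot demonstrate monitoring sufficiency.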
Each of these subprocess failures maps directly to an absent output: a missing piece of certified evidence that should have existed to demonstrate the enterprise's management of its trust obligations. The story of Ascension is not about missed cybersecurity best practices or inadequate IT budgets. It is about a trust factory that was never built, never funded, and never governed as a critical enterprise asset.
The Missing Governors
Even the most well-designed Trust Factory subprocesses will drift toward entropy without proper governors, control systems that keep trust manufacturing tethered to reality and accountable to stakeholders. TVM defines two critical governors that were absent from Ascension's operations: the Claims Registry (CR) and the Emotional Supply Chain (ESC).
The Claims Registry is neither a compliance document nor a marketing artifact. It is a curated, versioned set of permitted claims, specific statements that an enterprise is authorized to make about its security posture, operational resilience, and trustworthiness. Each permitted claim must be bound to a lineage that traces back to specific certified artifacts, and each claim must be renewed on predetermined schedules as underlying conditions change. The CR prevents what TVM calls "presentation outrunning proof," the dangerous gap between what executives claim about their organization's trustworthiness and what they can actually demonstrate with evidence.
If Ascension's executives made claims in annual reports, regulatory filings, marketing materials, or direct patient communications that their systems were resilient, secure, or designed to ensure continuity of care, those claims should have been CR-bound and artifact-backed. Each claim should have been supported by specific evidence: segmentation artifacts that prove network isolation, identity governance artifacts that demonstrate access controls, monitoring artifacts that show detection capabilities, and business continuity artifacts that document disaster recovery procedures.
Without such a registry, executives were free to overstate organizational resilience with no structural mechanism to tether their statements to actual evidence. This is not a matter of intent or honesty; even well-intentioned executives will drift toward optimistic presentations when there are no governing forces to compel regular reconciliation between claims and evidence.
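The reconciliation the registry enforces can be sketched in a few lines. Under the CR model described above (the record schema is illustrative), a claim remains permitted only while every artifact in its lineage is current:

```python
# Sketch: reconcile a Claims Registry against its backing artifacts.
# A claim is permitted only while every artifact in its lineage is current.
# The schema illustrates the CR concept described above, not a product.

def permitted_claims(registry, artifacts):
    """Map each claim to (permitted?, list of lapsed or missing artifact IDs)."""
    current = {a["id"] for a in artifacts if a["current"]}
    results = {}
    for claim in registry:
        missing = [a for a in claim["lineage"] if a not in current]
        results[claim["text"]] = (len(missing) == 0, missing)
    return results

artifacts = [
    {"id": "seg-2024Q1", "current": True},
    {"id": "mon-2023Q4", "current": False},  # lapsed, never renewed
]
registry = [
    {"text": "Our clinical systems are isolated from contractor networks.",
     "lineage": ["seg-2024Q1"]},
    {"text": "Intrusions are detected within hours.",
     "lineage": ["seg-2024Q1", "mon-2023Q4"]},
]
for text, (ok, missing) in permitted_claims(registry, artifacts).items():
    print(ok, missing, "-", text)
```

Run on a schedule, this check is the structural mechanism the paragraph above says was absent: a claim whose lineage lapses is automatically withdrawn from the permitted set, regardless of how optimistic anyone feels.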
The Emotional Supply Chain ensures that assurance, the felt sense of trustworthiness, is delivered where and when decisions are made. In healthcare, this means patients must feel confident that their care will continue uninterrupted, clinicians must trust that their tools will function reliably, and administrators must believe that operational systems will support rather than hinder patient care.
The ESC operates through delivery frames that map which audiences (patients, clinicians, regulators, investors) receive which specific assurances, when those assurances are delivered, and how they are calibrated for maximum impact. Each delivery frame is resonance-tested against the target audience and bound to specific story versions that can be updated as conditions change. The goal is not generic communication but precise emotional engineering: delivering exactly the right assurance to the right audience at the right time to support optimal decision-making.
Had Ascension been operating with ESC discipline, patients and clinicians would have received pre-positioned, renewable assurances about continuity of care that would have persisted even under operational stress. These assurances would not have been marketing promises but evidence-backed commitments tied to specific artifacts and renewable on predetermined schedules. Instead, in May 2024, when conviction and confidence were most desperately needed, the enterprise had nothing to deliver except apologies and workarounds.
Without these governors, presentation inevitably outran proof. Executives could present organizational maturity without supporting evidence. When a crisis arose, there was no emotional supply chain in place to convey assurance to the operational edge, where patient care is actually delivered.
Evidence Operations That Never Ran
The Trust Factory framework also defines a critical Evidence Operations layer, the systematic conversion of raw operational data into admissible trust artifacts. This is not simply data collection or log aggregation. It is the disciplined transformation of noise into proof.
Every enterprise generates massive quantities of raw inputs: system logs documenting access patterns, configuration files showing security settings, HR records tracking employee offboarding, vendor assessment reports, meeting minutes capturing security decisions, interview notes from incident responses, and decision rationales explaining why certain risks were accepted. But until these raw inputs are converted into certified artifacts with clear lineage, predetermined renewal cadence, and defined acceptance tests, they remain inadmissible as evidence of trustworthiness.
At Ascension, the raw data almost certainly existed somewhere in the organization. Access logs revealed credential usage patterns, network diagrams documented system architecture, policies governing third-party device access, incident response procedures, and numerous other data points. But none of this raw material was systematically converted into certified trust artifacts through Evidence Operations processes.
The difference between raw data and certified artifacts is not merely administrative. Raw data is noise; it exists, but it cannot be relied upon to prove anything specific about organizational trustworthiness. Evidence Operations transforms noise into legally and operationally admissible proof through systematic processes that verify completeness, accuracy, currency, and relevance. The absence of this conversion process means that even good raw data cannot be assembled into trust assurances when they are needed most.
In a properly functioning Trust Factory, Evidence Operations produces five distinct classes of outputs:
Certified Trust Artifacts: These are renewable, lineage-tracked evidence objects that demonstrate specific aspects of an organization's trustworthiness. Each artifact is directly tied to the threat exposures it mitigates, the business processes it protects, and the stakeholder assurances it supports. Artifacts are not static documents but living objects that are renewed, tested, and recertified as conditions change.
Trust Stories: These are shippable units that bind multiple artifacts together to address specific trust buyer needs. Unlike generic compliance reports or security assessments, Trust Stories are versioned, audience-specific narratives that are warrant-backed by certified artifacts and designed for maximum persuasive impact with particular stakeholder groups.
Trust Value Indicators (TVIs): These are finance-legible metrics that translate trust manufacturing into business impact measurements. TVIs show how trust investments affect revenue generation, customer retention, deal velocity, borrowing costs, and enterprise valuation. They make trust manufacturing visible to CFOs and boards who must allocate resources based on measurable returns.
Durable Records: These are sealed assurance objects explicitly designed for regulatory and audit use. Unlike artifacts that are renewed on operational schedules, Durable Records are time-stamped, cryptographically sealed evidence packages that can prove organizational state at specific moments in time. They are designed to survive legal discovery and regulatory examination while maintaining their evidentiary integrity.
Trust Tokens: These are encoded units of trustworthiness that can be exchanged across ecosystem boundaries. Trust Tokens allow organizations to portably demonstrate specific capabilities to partners, customers, and regulators without exposing sensitive operational details.
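Of these five output classes, the Durable Record is the most mechanically concrete: a sealed, time-stamped evidence package whose integrity can be verified later. A minimal sketch follows; a production system would add a signing key and a trusted timestamp authority, while this version uses only a content hash to make tampering detectable:

```python
import hashlib
import json
from datetime import datetime, timezone

# Sketch: seal a Durable Record so later tampering is detectable.
# Minimal version: content hash plus UTC timestamp. A real system would
# sign the record and anchor the timestamp with a trusted authority.

def seal_record(payload):
    body = json.dumps(payload, sort_keys=True).encode()
    return {
        "payload": payload,
        "sealed_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(body).hexdigest(),
    }

def verify_record(record):
    body = json.dumps(record["payload"], sort_keys=True).encode()
    return hashlib.sha256(body).hexdigest() == record["sha256"]

rec = seal_record({"control": "network-segmentation", "state": "attested"})
assert verify_record(rec)          # untouched record verifies
rec["payload"]["state"] = "tampered"
print(verify_record(rec))          # → False: the seal no longer matches
```

The evidentiary point is the asymmetry: sealing is cheap at the moment of attestation, but retroactively fabricating a consistent record after a breach is not, which is what makes such records useful under legal discovery.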
Ascension shipped none of these outputs. What it shipped instead was system downtime, operational disruption, regulatory investigations, patient lawsuits, and substantial shareholder losses. The absence of Evidence Operations did not just make the breach more likely; it made effective crisis response structurally impossible.
Why This Keeps Happening
The natural question at this point is why we continue to see the same catastrophic pattern repeated across industries: devastating breaches that could have been prevented, massive stakeholder losses that were entirely predictable, and yet no systematic change in how boards and executives approach trust manufacturing. The pattern is so consistent (Equifax, Target, Anthem, SolarWinds, Colonial Pipeline, and now Ascension) that it suggests something deeper than individual organizational failures.
The answer is not mysterious, but it is uncomfortable: software, and by extension the enterprises that anchor their operations to software systems, operate inside a carefully constructed liability-free zone. Over the past three decades, five overlapping layers of legal immunity have been systematically built to shield software operators from the consequences of their negligence, creating systemic incentives that favor extraction and speed over resilience and accountability.
Product Liability Exemptions: Unlike aviation, pharmaceuticals, automotive, or most other industries that produce products on which lives depend, software is largely exempt from strict liability standards for defects. If a commercial aircraft crashes due to faulty engineering, liability attaches swiftly and comprehensively to manufacturers, suppliers, and operators. If a pharmaceutical drug causes harm due to inadequate testing, liability flows backward through the entire development and approval chain. If an automobile's brakes fail due to poor design, manufacturers face both legal consequences and market punishment.
Software faces no such accountability regime. When hospital software systems collapse and patients die, when financial software fails and destroys retirement savings, when infrastructure software is compromised and disrupts essential services, liability typically does not attach to the software vendors, system integrators, or platform operators who created the conditions for failure.
Safe Harbors and Platform Immunities: Legal frameworks like Section 230 and similar regimes worldwide shield technology platforms from the consequences of foreseeable harms mediated by their code. Even when failures are entirely predictable based on system design choices, operators remain immunized from liability. These protections were crafted initially for narrow circumstances but have expanded into comprehensive shields against accountability.
Contractual Waivers: The software industry has normalized shrink-wrap and click-wrap agreements that systematically disclaim responsibility for software defects, security failures, and operational disruptions. Hospitals, enterprises, and individual users routinely purchase and deploy systems "as-is" with no meaningful recourse when inevitable failures occur. These contractual structures ensure that operational risk is pushed downward to end users while financial benefits flow upward to vendors.
Governance by Design: Software vendors systematically push risk downward by encoding defaults that prioritize backward compatibility and ease of adoption over security and resilience. Microsoft's decision to allow Active Directory systems to fall back to legacy RC4 encryption was not a technical inevitability imposed by the laws of computing. It was a governance decision that prioritized vendor convenience over customer security. These choices are made deliberately, with full knowledge that they create exploitable vulnerabilities, but vendors face no meaningful consequences for these decisions.
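The RC4 fallback is visible in a single directory attribute. Active Directory accounts carry an msDS-SupportedEncryptionTypes bitmask, and an account whose attribute is unset has historically been treated as RC4-capable by default, which is the vendor-convenience choice described above. A small sketch of interpreting that bitmask (the account data is illustrative):

```python
# Sketch: interpret the Active Directory msDS-SupportedEncryptionTypes
# bitmask to see whether an account can still fall back to RC4.
# The flag values are the standard Kerberos encryption-type bits;
# the accounts shown are illustrative.

RC4_HMAC = 0x04
AES128   = 0x08
AES256   = 0x10

def allows_rc4(enc_types):
    """True if the account can negotiate RC4 (including the unset-attribute default)."""
    if not enc_types:
        return True  # unset (0/None) has historically meant RC4 permitted
    return bool(enc_types & RC4_HMAC)

accounts = {
    "svc-legacy": RC4_HMAC,          # RC4 only
    "svc-unset":  None,              # default: falls back to RC4
    "svc-modern": AES128 | AES256,   # AES only, no RC4 fallback
}
for name, enc in accounts.items():
    print(name, "RC4 possible:", allows_rc4(enc))
```

Note the structural asymmetry: hardening requires an administrator to set the attribute on every account, while the insecure state requires no action at all. Defaults are governance decisions.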
Political Entrenchment: Decades of sustained lobbying have successfully blocked the emergence of duty-of-care standards that would impose meaningful liability on software vendors and operators. The technology industry has invested enormous resources in ensuring that regulatory frameworks remain fragmented, under-resourced, and largely toothless. This political entrenchment ensures that even obviously necessary reforms face years of opposition and delay.
Together, these five layers create what can only be described as immunity by design. Executives who preside over software-mediated disasters—whether cybersecurity breaches, operational outages, or platform-enabled harms—are systematically insulated from personal and corporate consequences. The costs of their negligence are externalized to patients, customers, shareholders, and the general public, while the benefits of risk-taking accrue to the operators who make those choices.
This is why breaches recur with such predictable regularity. The systemic incentive structure rewards executives who extract short-term value by deferring investment in trust operations, because the inevitable downside never lands on the decision-makers who created the conditions for failure. Until this immunity lattice is punctured by legal precedent or regulatory intervention, rational executives will continue to trade away trust value because it is economically optimal to do so within the current regime.
The Caremark Horizon
Corporate governance scholars have long wondered why cybersecurity has not yet experienced its "Caremark moment"—the legal precedent that would extend traditional fiduciary liability principles to directors who fail to implement adequate trust and security systems. The reference is to In re Caremark International Inc. Derivative Litigation, the 1996 Delaware case that established that corporate directors can be held personally liable for failing to implement adequate compliance and monitoring systems.
The Caremark standard creates what lawyers call "oversight liability"—directors cannot simply delegate compliance to management and walk away. They have a fiduciary duty to ensure that adequate systems are in place to monitor legal compliance and operational risk. When those systems are absent or inadequate, and harm results, directors can face personal liability even if they had no direct involvement in the underlying misconduct.
Cybersecurity and trust manufacturing seem like natural applications for Caremark principles. Directors who approve budgets that systematically under-fund trust operations, who fail to ensure adequate oversight systems exist, or who allow presentation to outrun proof in shareholder communications would appear to be violating basic fiduciary duties. The harm from these failures is often massive and entirely foreseeable.
Yet the precedent has not emerged. Courts have been reluctant to extend Caremark liability to cybersecurity failures, typically finding that directors satisfied their oversight obligations by receiving periodic briefings from management or hiring external consultants to conduct assessments. The bar for proving oversight liability remains exceptionally high, and directors continue to enjoy broad protection through business judgment rule presumptions and comprehensive insurance coverage.
But legal precedents do not emerge until the proper case aligns with the right judicial climate. The elements for a cybersecurity Caremark breakthrough are increasingly falling into place: more frequent and severe breaches, more precise documentation of board-level negligence, growing regulatory pressure, and mounting evidence that existing oversight approaches are systemically inadequate.
When that precedent finally emerges—and it will—the excuses that currently protect directors will evaporate instantly. "No one else has been sued for this." "Everyone in the industry does it this way." "Legal counsel signed off on our approach." "We followed industry best practices." These rationalizations will become legally irrelevant once courts establish that directors have personal fiduciary obligations that cannot be satisfied through delegation to management or outsourcing to consultants.
The first successful Caremark claim for cybersecurity oversight failure will create a cascade effect. Directors across industries will suddenly face personal liability for trust manufacturing failures they previously considered management problems. Insurance coverage will become more expensive and restrictive. Board compensation will need to reflect newly recognized personal risks. The entire incentive structure of corporate governance will shift overnight.
Trust Value Management is the pre-emptive response to this inevitable legal evolution. Organizations that implement comprehensive Trust Factories before the precedent arrives will be positioned to demonstrate that their directors satisfied their fiduciary obligations through systematic oversight of trust manufacturing. Organizations that wait for the precedent will find themselves defending inadequate systems after liability has already attached.
Job Descriptions as Fiduciary Evidence
There is another legal inevitability embedded in current corporate structures that makes the emergence of personal liability even more predictable. Every enterprise routinely publishes its own detailed map of responsibility and authority, known as job descriptions. These documents, created and maintained by the organization itself, provide prima facie evidence of who owns which exposures and the authority granted to manage those risks.
A Chief Information Security Officer job description typically outlines in plain language that this individual is responsible for mitigating mission-critical cybersecurity exposures across the enterprise. It describes their scope of responsibility, budget authority, reporting relationships, and specific areas of accountability. In litigation, these documents become self-authenticating artifacts—they require no subpoena to obtain, cannot be claimed under attorney-client privilege, and contain no ambiguity about organizational intent. They are the enterprise speaking in its own voice about how it has allocated fiduciary responsibility.
Once courts begin connecting job descriptions to fiduciary obligations, the path to personal liability becomes remarkably straightforward. A breach occurs that causes significant harm. The relevant job description clearly shows that responsibility for preventing such breaches was assigned to specific individuals with defined authority and resources. Those individuals either failed to implement adequate systems or were unable to escalate inadequacies to board-level oversight. The causal chain from assigned responsibility to demonstrable harm becomes legally evident.
This creates what legal scholars call "inevitable liability"—not because of any change in law, but because existing legal principles will eventually be applied to organizational structures that were built without considering their legal implications. The job descriptions written to clarify management accountability will serve as evidence to establish personal liability.
The irony is particularly sharp. The same managerial revolution that systematically stripped professional advisors of decision-making authority also published detailed documents assigning them responsibility for managing risks they cannot control. Responsibility without authority is not just an organizational injustice—it is the architectural foundation for inevitable legal liability. The first case that successfully aligns "job description assignment," "breach occurrence," and "demonstrable harm" will create a precedent that transforms corporate governance across industries.
The Pied Piper Posture
A common defensive response to Trust Value Management implementation is what can be called the "Pied Piper posture": no one else has been sued for this; the market hasn't punished inadequate trust manufacturing; other enterprises in our industry don't implement comprehensive Trust Factories, so why should we bear the cost and complexity of doing so?
This herd mentality is precisely how systemic risk propagates across entire industries. Each operator looks laterally at peer behavior rather than forward at legal and market evolution. Each assumes that if competitors have not yet faced consequences for inadequate trust manufacturing, they can safely continue with status quo approaches. The reasoning appears rational within a narrow time horizon: if others are not being punished, punishment must not be a significant risk.
But this reasoning is brittle in precisely the way that creates systemic catastrophic risk. It remains in effect only until the first precedent-setting case or regulatory action. Once that threshold is crossed, the fact that "everyone was doing it" becomes legally and economically irrelevant. Tobacco companies operated under herd protection until the first successful liability lawsuit pierced their collective shield. Asbestos manufacturers assumed their shared practices provided safety until courts shifted liability standards across the industry simultaneously. Energy companies believed their common environmental practices were legally protected until regulatory frameworks evolved to impose retroactive liability.
The accounting industry provides an even more precise parallel. For decades, accounting firms believed that aggressive interpretations of financial reporting standards were protected by industry-wide adoption of similar practices. Arthur Andersen's partners assumed their approach to Enron was defensible because comparable techniques were widely used across their industry. When legal and regulatory pressure finally arrived, that assumption provided no protection whatsoever; herd following became evidence of industry-wide negligence rather than a defense against individual liability.
A similar dynamic is emerging across the cybersecurity and trust manufacturing landscape. Each enterprise that defers comprehensive Trust Factory implementation because competitors have not yet faced consequences is participating in the same collective delusion that has preceded every major shift in liability standards. The rationalization works perfectly until the first precedent arrives, at which point it becomes entirely irrelevant.
TVM is designed as a preemptive correction rather than a post-litigation scramble. Organizations that implement comprehensive trust manufacturing before liability standards shift will be positioned to demonstrate that they were managing risks that their competitors ignored. Organizations that wait for legal pressure will find themselves implementing expensive remediation after liability has already attached and their competitive position has been compromised.
The Double Failure of Liability-Free Software
Ascension is not a software vendor, but by anchoring its patient care operations and administrative systems to Microsoft Active Directory and other commercial software platforms, it voluntarily stepped inside the same liability-free zone that protects those vendors from the consequences of their design decisions. This created a double failure of accountability that made a catastrophic breach virtually inevitable.
For Microsoft and other enterprise software vendors, immunity from liability is not an accident, but the result of decades of deliberate legal and political strategy. Their business models are explicitly designed to externalize operational risk while capturing financial returns. When Active Directory systems are configured with dangerous defaults that prioritize backward compatibility over security, when legacy encryption protocols are preserved for vendor convenience despite known vulnerabilities, when authentication systems are designed with exploitable weaknesses that have been publicly documented for years, these are not oversights. They are predictable outcomes of incentive systems that reward rapid deployment and market share growth while externalizing security costs to customers.
Microsoft's negligence is entirely rational within its operating environment because its risks are systematically externalized. When healthcare systems collapse due to Active Directory compromises, financial institutions are breached through Windows authentication flaws, and critical infrastructure is disrupted by vulnerabilities in Microsoft's ecosystem, the costs are borne by customers and their stakeholders, while Microsoft maintains its market position and profitability.
For Ascension, the negligence operates at a different but equally damning level: the organizational failure to recognize that building a healthcare empire on software systems means building on a substrate explicitly designed to be liability-free. Healthcare has unique legal and ethical obligations to patients that cannot be satisfied by importing the risk management approaches of consumer technology companies. When you anchor patient care to systems engineered for immunity rather than accountability, you create an irreconcilable conflict between your fiduciary obligations and your operational foundations.
Hospital executives who deploy software systems without accounting for their immunity-by-design characteristics are essentially gambling with patient lives using dice that are loaded against accountability. They are importing into healthcare—an industry where negligence traditionally carries severe legal and professional consequences—the operational risk profile of an industry where negligence is systematically shielded from consequences.
This is the invisible contract at the heart of every major breach: patients and shareholders absorb the costs of software failures, vendors remain protected by immunity lattices, and executives trade away trust value in exchange for short-term operational efficiency. The cycle repeats because it is rational for each participant within their individual incentive structure, even though it is collectively destructive for the ecosystem as a whole.
When patients' lives depend on systems that are engineered for vendor immunity rather than operational accountability, disaster is not an unfortunate accident. It is a predictable feature of the operational environment. The only systemic escape is to manufacture trust as an asset that can survive within immunity-protected ecosystems, which is precisely what Trust Value Management is designed to accomplish.
The Cost in Trust Value
The impact of a breach is typically quantified in terms of stolen records, system downtime, regulatory fines, and litigation costs. These measurements capture direct expenses but overlook the real economic damage: the erosion of trust value as a measurable asset. Trust value encompasses all the economic benefits that flow from stakeholder confidence in organizational reliability, and its destruction creates cascading financial consequences that persist long after technical systems are restored.
At the human level, the Ascension breach destroyed multiple layers of trust simultaneously:
Patient Trust: Patients reasonably expected that choosing Ascension for their healthcare meant their care would continue uninterrupted, regardless of operational challenges. They trusted that the hospital's systems were designed with sufficient redundancy and resilience to maintain continuity even under stress. The breach revealed that this basic assumption was false—their medical records could vanish, their care could be disrupted, and their personal information could be stolen due to infrastructure failures entirely outside their control.
Clinician Trust: Doctors, nurses, and other healthcare professionals trusted that their digital tools would function reliably when patients’ lives depended on them. They built their professional workflows around the assumption that electronic health records, prescription systems, laboratory interfaces, and care coordination platforms would remain available. The breach forced them to discover that this professional infrastructure was far more fragile than they had been led to believe.
Regulatory Trust: Healthcare regulators trusted that Ascension was meeting its duty-of-care obligations through adequate systems and controls. They assumed that an organization operating 140 hospitals had implemented sufficient safeguards to protect patient information and ensure continuity of care. The breach demonstrated that these regulatory assumptions were unfounded.
Investor Trust: Shareholders and lenders trusted that Ascension's enterprise valuation was defensible, based on sustainable competitive advantages and adequate risk management. They assumed that the organization's digital infrastructure was an asset that enhanced operational efficiency rather than a liability that could destroy value overnight. The breach revealed that years of ostensible digital transformation had actually created concentrated risk rather than distributed resilience.
But trust value destruction is not merely emotional or reputational damage. Trust has measurable financial consequences that can be quantified using Trust Value Management methodologies:
Trust Contribution Margin (TCM) Collapse: TCM measures the incremental profit margin created when organizational trustworthiness accelerates deal negotiations, expands customer relationships, and reduces churn rates. When trust is intact, customers buy faster, buy more, and stay longer because their confidence in organizational reliability reduces their perceived risk and transaction costs. When breaches occur, TCM collapses across all business lines. Prospective patients delay elective procedures, existing patients switch to competitors, referral relationships deteriorate, and partnership negotiations stall. Revenue per relationship declines while the cost of customer acquisition increases.
Trust-Discounted Weighted Average Cost of Capital (WACC) Increase: Trust destruction increases an organization's cost of capital as lenders and investors price breach risk into their required returns. Credit ratings agencies downgrade organizations that have demonstrated inadequate risk management, forcing higher interest rates on debt financing. Equity investors demand higher returns to compensate for demonstrated operational instability. Capital becomes both more expensive and more difficult to obtain, constraining growth and forcing less efficient capital allocation decisions.
Trust-Assisted Average Contract Value (ACV) Lift Reversal: When trust is intact, it accelerates procurement processes and shortens due diligence cycles, allowing organizations to capture higher contract values with lower sales costs. Trust artifacts that can be produced on demand reduce buyer risk and eliminate costly verification processes. When trust is destroyed, this process reverses. Sales cycles lengthen as buyers implement additional due diligence requirements. Contract values decrease as buyers demand discounts to compensate for perceived risk. Customer lifetime value erodes as relationships require more intensive management and face higher churn probability.
Portfolio Valuation Drag: Trust destruction does not remain isolated within the business units directly affected by a breach. It creates a valuation discount that applies across the entire enterprise portfolio, thereby reducing the economic value of all business lines and diminishing the organization's resilience for future acquisitions, partnerships, or public offerings. This portfolio effect can persist for years after technical systems have been restored and regulatory investigations have concluded.
These financial consequences are not theoretical. They can be measured, tracked, and projected using the same financial methodologies applied to other enterprise assets. Trust Value Management provides the analytical framework for quantifying trust as an asset before it is destroyed and measuring the cost of its destruction when prevention fails.
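The mechanics of these metrics can be made concrete with a small model. The sketch below is purely illustrative: the formulas (a risk premium added to WACC, a Gordon-growth perpetuity for enterprise value) and every input number are hypothetical placeholders chosen for the example, not figures from Ascension's financials or from any published Trust Value Management methodology.

```python
"""Illustrative back-of-envelope model of trust-value erosion after a breach.

All inputs are hypothetical; none of these numbers come from Ascension's
financials or from any published TVM standard.
"""

def trust_discounted_wacc(base_wacc: float, breach_risk_premium: float) -> float:
    """Lenders and investors price demonstrated breach risk into required returns."""
    return base_wacc + breach_risk_premium

def enterprise_value(free_cash_flow: float, wacc: float, growth: float) -> float:
    """Gordon-growth perpetuity: EV = FCF / (WACC - g)."""
    assert wacc > growth, "discount rate must exceed growth"
    return free_cash_flow / (wacc - growth)

# Hypothetical pre-breach position.
fcf = 500.0          # free cash flow, $M/year
base_wacc = 0.08     # 8% cost of capital
growth = 0.02        # 2% long-run growth

ev_before = enterprise_value(fcf, base_wacc, growth)

# After the breach: capital reprices risk (+150 bps on WACC) and the Trust
# Contribution Margin collapse trims cash flow (slower deals, higher churn).
wacc_after = trust_discounted_wacc(base_wacc, 0.015)
fcf_after = fcf * (1 - 0.06)   # assume a 6% TCM hit

ev_after = enterprise_value(fcf_after, wacc_after, growth)
drag = 1 - ev_after / ev_before   # portfolio valuation drag

print(f"EV before: ${ev_before:,.0f}M")
print(f"EV after:  ${ev_after:,.0f}M")
print(f"Portfolio valuation drag: {drag:.1%}")
```

Note how the drag compounds: a modest cash-flow haircut and a modest repricing of capital multiply into a roughly quarter-of-enterprise-value discount, which is why trust destruction shows up far beyond the line items in a breach report.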
This is the actual economic cost of failing to manufacture trust systematically. The numbers cited in breach reports—stolen records, remediation costs, regulatory fines—are accounting artifacts that miss the real destruction of enterprise value. The collapse is in trust capital, which takes years to rebuild and may never fully recover. Organizations that understand this dynamic invest in Trust Factories as financial assets. Organizations that ignore it treat trust as an externality until the market corrects their accounting.
The Wrong Questions
Most breach analysis focuses on forensic details that miss the systemic causality. The questions that dominate post-incident reporting are precisely the wrong questions:
Why was RC4 encryption still enabled on legacy systems?
Why was the service account password so weak and static?
Why did the contractor have network access to critical systems?
Why did monitoring systems fail to detect lateral movement for three months?
Why were backups inadequate for rapid recovery?
These questions treat symptoms as causes and technical details as explanatory. They generate answers that lead to tactical fixes—such as updating passwords, disabling legacy protocols, segmenting networks, and improving monitoring—that often miss the structural reasons why these technical failures were inevitable.
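To make concrete what such tactical fixes look like in practice, the audit they imply can be sketched as a pass over directory account records, flagging the two conditions behind the Ascension questions: Kerberoastable service accounts and stale credentials. This is a hypothetical sketch; the record fields are illustrative stand-ins, not the real Active Directory schema (in AD they would correspond to attributes such as servicePrincipalName, msDS-SupportedEncryptionTypes, and pwdLastSet), and such a check addresses exactly the symptoms, not the governance absence, described above.

```python
"""Sketch of a tactical account-hygiene audit: flag accounts that are
Kerberoastable (an SPN combined with RC4-only encryption) or that carry
stale passwords. Field names are illustrative, not real directory attributes.
"""
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Account:
    name: str
    has_spn: bool        # has a registered service principal name
    enc_types: set       # supported Kerberos encryption types
    password_set: date   # last password rotation
    findings: list = field(default_factory=list)

def audit(accounts, today=date(2024, 5, 1), max_age_days=365):
    """Return accounts with at least one finding, annotated in place."""
    flagged = []
    for a in accounts:
        if a.has_spn and a.enc_types == {"RC4"}:
            a.findings.append("kerberoastable: SPN with RC4-only encryption")
        if (today - a.password_set).days > max_age_days:
            a.findings.append("stale password: exceeds rotation policy")
        if a.findings:
            flagged.append(a)
    return flagged

accounts = [
    Account("svc-backup", True,  {"RC4"},    date(2019, 3, 1)),
    Account("svc-ehr",    True,  {"AES256"}, date(2024, 1, 15)),
    Account("jdoe",       False, {"RC4"},    date(2024, 4, 1)),
]

flagged = audit(accounts)
for a in flagged:
    print(a.name, "->", "; ".join(a.findings))
```

Running this flags only the hypothetical svc-backup account, on both counts. The essay's point stands regardless: nothing in such a script certifies, on a renewable schedule, that anyone ran it, acted on it, or was empowered to.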
The right questions start from a fundamentally different premise: breaches are manufactured through absent systems rather than present failures.
Why were no identity governance artifacts produced and certified on a renewable schedule? This question shifts focus from specific password weaknesses to the absent subprocess that should have been manufacturing proof of identity management adequacy. It highlights governance gaps rather than technical ones.
Why were no network segmentation artifacts maintained with verified cadence? This reframes network architecture from a technical implementation to a governance asset that requires continuous certification and renewal. It asks why the enterprise had no proof of segmentation adequacy rather than why specific segments failed.
Why were third-party onboarding artifacts absent from enterprise governance? This shifts the analysis from the specific contractor device to the missing subprocess that should have been continuously certifying that all third-party access met security requirements. It asks why the enterprise had no proof that contractor access was safe, rather than why this particular contractor caused problems.
Why did executives allow presentation to outrun proof across all stakeholder communications? This question cuts to the heart of governance failure: the systematic gap between what leaders claimed about organizational resilience and what they could actually demonstrate with certified evidence. It points toward absent governors rather than inadequate technical controls.
Who signed off on a governance model that manufactured extraction rather than trust, and why were they not held accountable when the inevitable consequences materialized? This is the ultimate question that connects breach causality to executive decision-making and board oversight. It asks why organizational incentives rewarded short-term efficiency over long-term resilience, and why the people who created those incentives faced no consequences when their choices destroyed stakeholder value.
Until journalists, regulators, and boards start asking these structural questions instead of focusing on technical forensics, every breach report will continue to be a misdirection that enables the same failures to repeat across industries with predictable regularity.
Why Lawyers Don't Stop It
At this point, a natural objection arises from executives who have received cybersecurity advice from their legal departments: If the structural problems described here are so obvious and the liability risks so clear, why haven't my lawyers prevented me from making these mistakes? Why hasn't legal counsel stopped me from building trust-free organizations that create foreseeable harm?
The answer reveals one of the most important but least understood aspects of modern corporate governance: lawyers have been systematically stripped of the authority to govern. In contemporary enterprises, legal departments advise but do not decide. Their recommendations are routinely overridden when they conflict with executive prerogative, revenue targets, or shareholder pressure. This transformation represents a profound shift from earlier eras of corporate governance, and it explains why legal warnings about cybersecurity risks are consistently ignored until after disasters occur.
Corporate counsel has been effectively deprofessionalized over the past several decades. In earlier periods of American business history, general counsel could exercise significant veto power over decisions that posed an existential risk to the enterprise. They were partners in governance rather than service providers. Legal departments were profit centers that protected enterprise value by preventing catastrophic decisions, and their authority to say "no" was respected even when it conflicted with short-term business objectives.
Today, legal departments are measured as cost centers optimized for efficiency and responsiveness rather than authority and independent judgment. They draft contracts that minimize vendor liability while maximizing enterprise risk. They negotiate indemnifications that protect suppliers while exposing their own organizations to risk. They provide compliance advice that checks regulatory boxes while ignoring operational reality. But they do not stop decisions that create foreseeable harm, because they have been structurally prevented from exercising that authority.
The postmortem documentation from every major corporate disaster reveals the same pattern: legal warned us, but we proceeded anyway. Internal emails and meeting minutes consistently show that lawyers identified risks, recommended against the adoption of dangerous technologies, flagged potential exposures, and outlined worst-case scenarios with remarkable accuracy. But executive teams, under pressure from boards focused on quarterly results and market performance, systematically overrode legal advice that conflicted with business objectives.
This is not a failure of legal reasoning or professional competence. It is a failure of structural incentives within the liability-free zone that governs software-dependent enterprises. When executives face no personal consequences for ignoring legal advice about cybersecurity risks, and when shareholders reward short-term efficiency gains regardless of long-term risk accumulation, rational executives will consistently override legal warnings that impose costs without generating immediate returns.
The deprofessionalization of corporate counsel is not accidental. It is the predictable result of managerial ideologies that subordinated professional expertise to executive authority throughout the 20th century. Legal departments were deliberately restructured from independent governance partners into internal service providers, and their transformation mirrors similar changes in how enterprises treat other professional advisors, including auditors, risk managers, compliance officers, and cybersecurity professionals.
The Glass Ceiling of the Trusted Advisor
This dynamic reflects what can be called "the glass ceiling of the trusted advisor," a structural limitation that keeps professional experts close enough to see organizational risks but never close enough to prevent them. Over the past century, managerial practice has systematically enclosed lawyers, auditors, risk professionals, and security experts within advisory roles that provide proximity without power.
The enclosure was deliberate and ideological. Early 20th-century management theory explicitly argued that professional expertise should be "on tap, not on top," meaning it should be available for consultation but never empowered to override executive judgment. This managerial revolution subordinated technical knowledge to executive prerogative as a matter of organizational principle. Experts were repositioned as advisors whose recommendations could be accepted or rejected based on executive discretion rather than professional standards.
The result is a carefully constructed glass ceiling that provides trusted advisors with remarkable visibility into organizational risks while systematically preventing them from taking action to address those risks. Cybersecurity professionals can see that network architectures create foreseeable vulnerabilities, but they cannot override executive decisions to defer expensive remediation. Risk managers can identify that business strategies create unacceptable exposures, but they cannot prevent executives from pursuing those strategies when they generate short-term revenue.
Legal counsel can predict with considerable accuracy that specific technology deployments will create liability, but they cannot prevent executives from deploying those technologies when competitive pressure demands rapid implementation. The advisors are close enough to understand the risks, close enough to document their warnings, close enough to say "I told you so" after disasters occur, but never close enough to actually prevent the disasters from happening.
Their proximity to power is systematically mistaken for actual power, both by the advisors themselves and by external observers who wonder why professional experts failed to prevent foreseeable disasters. But proximity is not power; it is a form of containment that provides the appearance of professional input while preserving executive autonomy to ignore that input when convenient.
This structural arrangement serves executive interests by providing liability protection ("We consulted with experts") while preserving decision-making authority ("But we retained the right to make final business judgments"). It allows executives to claim they followed professional advice when convenient and to override that advice when it conflicts with business objectives, all while maintaining the appearance of responsible governance.
The Liability Lattice
The glass ceiling of trusted advisors is one bookend of the liability-free zone that protects software-dependent enterprises from accountability. Upstream, professional experts are systematically disempowered and prevented from exercising governance authority. Downstream, legal frameworks typically shield executives from personal liability when their decisions result in foreseeable harm. Together, these create a closed system where warnings are ignored, risks are externalized, and decision-makers remain insulated from consequences.
This explains why cybersecurity breaches recur with such predictable regularity across industries and enterprise types. It is not that professional experts fail to identify risks or provide adequate warnings. It is not that technical solutions are unavailable or prohibitively expensive. It is not even that executives are unaware of the potential consequences of their decisions.
The pattern persists because experts are structurally prevented from governing, while executives who ignore expert advice face no meaningful accountability when their decisions destroy stakeholder value. Professional competence is systematically subordinated to executive prerogative within a legal framework that shields decision-makers from the consequences of their negligence.
This arrangement is not sustainable indefinitely. Legal precedents eventually evolve to pierce immunity when the social costs of that immunity become too large to ignore. Regulatory frameworks eventually adapt to impose accountability when market failures become too widespread to tolerate. Professional standards eventually reassert themselves when the gap between expertise and authority creates too much systemic risk.
Trust Value Management is designed to operate effectively within current liability structures while positioning organizations to thrive when those structures inevitably evolve toward greater accountability and transparency. TVM does not wait for legal precedents or regulatory changes to occur. It manufactures trust as an asset that creates a competitive advantage regardless of liability frameworks, while simultaneously preparing organizations to demonstrate adequate governance when accountability standards eventually shift.
The Fiduciary Inevitability
There is one more structural element that makes the evolution toward greater executive accountability essentially inevitable: the documentary evidence that enterprises create and maintain about their own governance structures. Every organization publishes detailed job descriptions that assign specific responsibilities to named individuals. These documents, created by the enterprise itself, provide prima facie evidence of who owns which risks and what authority they were given to manage those exposures.
A Chief Information Security Officer job description typically specifies in plain language that this individual is responsible for protecting enterprise information assets, ensuring system availability, managing cybersecurity risks, and coordinating incident response. It documents their reporting relationships, budget authority, staffing resources, and specific areas of accountability. These documents are not privileged communications or confidential strategy papers. They are public artifacts that the organization uses to communicate its governance structure to employees, regulators, customers, and investors.
In litigation, job descriptions become self-authenticating evidence that requires no subpoena to obtain and cannot be shielded by attorney-client privilege. They represent the enterprise speaking in its own voice about how it has allocated responsibility and authority. When breaches occur that cause significant harm, these documents provide clear documentary evidence of who was assigned to prevent such breaches and what resources they were given to fulfill those responsibilities.
This creates what legal scholars recognize as "inevitable liability," not because of changes in law, but because existing legal principles will eventually be applied to organizational structures that were designed without considering their legal implications. The job descriptions written to clarify reporting relationships and performance expectations will serve as evidence to establish personal accountability when courts ultimately extend fiduciary liability to cybersecurity governance.
The timing of this legal evolution is uncertain, but its direction is not. Courts are increasingly willing to hold directors and officers personally liable for governance failures in areas where they have clear fiduciary duties. The gap between assigned responsibility and actual authority that characterizes most cybersecurity roles is precisely the kind of structural inadequacy that courts identify as fiduciary breach.
The first successful case that connects "job description assignment" + "breach occurrence" + "demonstrable harm" + "governance inadequacy" will create a precedent that transforms corporate accountability across industries. Directors and officers who believe they have successfully delegated cybersecurity risk to subordinates will discover that delegation without adequate oversight and resource allocation constitutes a fiduciary breach, rather than effective risk management.
Conclusion
The Ascension breach was not caused by a contractor's laptop, a malicious link, Kerberoasting techniques, or RC4 encryption. Nor was it caused by weak passwords, legacy protocols, or insufficient monitoring. These technical details are symptoms of a deeper structural failure: the systematic absence of trust manufacturing as a governed enterprise asset.
Ascension's executives chose to build a healthcare empire dependent on software systems without implementing the governance structures necessary to manufacture trust within those systems. They allowed presentation to outrun proof across all stakeholder communications. They operated without certified artifacts, without governors, without evidence operations, and without renewable outputs that could demonstrate trustworthiness to patients, clinicians, regulators, and investors when demonstration was most needed.
This pattern of absent trust manufacturing is not unique to Ascension or the healthcare industry. Every catastrophic breach of the past decade shares the same structural invariant: organizations trade away trust value in exchange for short-term operational efficiency, and when pressure arrives, they collapse because they have no trust assets to deploy. The technical details vary—different attack vectors, different vulnerabilities, different business contexts—but the governance failure is identical.
As long as breach reporting focuses on technical forensics rather than structural causality, the public will continue to be told a fundamentally misleading story. Cybersecurity failures will be framed as inevitable accidents caused by sophisticated attackers and technical complexity, rather than predictable consequences of governance choices that prioritize extraction over resilience.
The reality is that breaches are manufactured absences of trust, sustained by an immunity lattice that keeps executives insulated from accountability while patients, customers, and shareholders bear the costs. This system persists because it is economically rational for decision-makers who face no personal consequences when their choices destroy stakeholder value.
Until legal precedents pierce that immunity lattice—until courts extend fiduciary liability to directors who fail to implement adequate trust manufacturing systems—systemic incentives will not change. Executives will continue to trade away trust because it is rational to do so within a liability-free environment. Patients, shareholders, regulators, and the public will continue to bear the costs of decisions they did not make and risks they did not choose to accept.
Trust Value Management represents the only structural escape from this cycle. TVM does not merely prevent breaches through better technical controls or compliance processes. It manufactures trust as a measurable, renewable asset that creates competitive advantage while positioning organizations to demonstrate adequate governance when accountability standards inevitably evolve.
TVM realigns incentives so that trust manufacturing becomes more profitable than trust extraction. It produces certified artifacts that can withstand regulatory scrutiny and legal discovery. It creates governors that tether executive presentations to verifiable proof. It establishes evidence operations that convert raw data into admissible trust assets. Most importantly, it breaks the century-long enclosure of professional expertise within advisory roles by giving trusted advisors direct authority over financial levers that executives cannot ignore.
This transformation does not wait for regulatory change or legal precedent. It operates within existing frameworks while preparing for their inevitable evolution. Organizations that implement comprehensive Trust Factories before accountability standards shift will be positioned to thrive in a more liability-conscious environment. Organizations that wait for external pressure will find themselves implementing expensive remediation after liability has attached and competitive advantage has been lost.
The path forward requires acknowledging that trust is not an externality or a compliance artifact, but a core enterprise asset that must be cultivated with the same discipline, investment, and executive attention as any other source of sustainable competitive advantage. Anything less is just waiting for the precedent that will make such manufacturing legally mandatory rather than economically optional.
In the meantime, patients will continue to enter hospitals trusting that their care will not be interrupted by preventable infrastructure failures. Clinicians will continue to depend on systems that were never designed for the reliability their professional obligations require. Regulators will continue to assume that healthcare organizations are meeting their duty-of-care commitments, despite having no systematic way to verify their fulfillment. And investors will continue to price enterprise value based on digital transformation claims that have no evidentiary foundation.
Until we manufacture trust as systematically as we manufacture products, these forms of trust will continue to be violated, these dependencies will continue to fail, these assumptions will continue to be false, and these valuations will continue to collapse when reality imposes its own accounting.
The breach that didn't have to happen has happened. The question is whether we will learn from its structural causality or continue to treat symptoms while the underlying disease spreads across every sector of the economy that has become dependent on software systems designed for vendor immunity rather than stakeholder accountability.
Framing Crosswalk: Ars Technica vs. TVM Reframing
Key Displacement:
Ars frames the breach as technical; TVM frames it as a manufactured structural absence.
Ars points downward (engineers, passwords, ciphers); TVM points upward (executives, liability lattice, fiduciary obligations).
Ars recommends patching symptoms; TVM prescribes systemic re-engineering of trust as an asset.
Ars assumes software immunity as background noise; TVM makes immunity the central explanatory engine.


