The Founders @ We're Trustable - AI, BPO, CX, and Trust

The CISO as Patient-Safety Actor: Why Cybersecurity Is Now a Patient-Facing Function

When uptime, integrity, and clarity determine whether care arrives on time.

Rachel Maron
Jan 21, 2026

For a long time, healthcare security lived a polite lie.

The lie was that cybersecurity was an IT concern. A back-office hygiene practice. A necessary nuisance whose job was to keep auditors calm, insurers satisfied, and billing systems upright. If it occasionally annoyed clinicians or slowed workflows, well, that was the price of safety.

But here is the thing about lies in clinical environments. They do not stay abstract. They metabolize into harm.

In modern healthcare, there is no meaningful boundary between the technical and care systems. The stack is at the bedside. The network is the hallway. The EHR is the chart in the clinician’s hand while someone is scared, in pain, and half-dressed under fluorescent lights.

That means the CISO is no longer a perimeter guard. Whether they like it or not, they are a patient-safety actor.

When Systems Fail, Patients Feel Actual Harm

A ransomware attack does not “impact operations.” Between 2016 and 2021, 374 documented ransomware attacks on healthcare delivery organizations affected the protected health information of 42 million patients. During these attacks, computers and electronic health records were disabled or encrypted. Clinicians were forced to document care by hand. Appointments and surgeries were delayed or canceled. Emergency departments diverted ambulances.

In 2020, a ransomware attack at the University of Vermont Medical Center disabled chemotherapy infusion technology. A nurse compared those weeks to only one experience: working in a burn unit following the Boston Marathon bombing. The oncology department lost access to individualized EMR chemotherapy plan templates that drove nursing and pharmacy processes. Infusion visit volume dropped 52% in the first week. New patients could not access diagnostic services. The hospital created command centers to oversee ethical triage of systemic therapies.

University of Minnesota School of Public Health researchers estimate that between 42 and 67 patients died as a result of ransomware attacks between 2016 and 2021. That figure excludes deaths among privately insured patients, so the true toll is likely higher.

An EHR outage does not “reduce productivity.” It blocks medication reconciliation in the ER. When ransomware forced Universal Health Services offline in 2020, a clinical staff member reported having no access to any patient files, no history, nothing. Doctors could not access X-rays or CT scans. In operating rooms, anesthesia checklists disappeared. In ICUs, vital signs went unrecorded. In emergency departments, clinicians did not know patients’ allergies or the last medication administered.

A data integrity failure does not “raise compliance risk.” Hackers can alter medication details, allergy information, or diagnostic data. These changes lead to medical errors, misdiagnoses, and incorrect prescriptions. Wrong information persists in records over time, creating a continual risk of improper treatment.

Availability failures feel like abandonment.
Integrity failures feel like betrayal.
Latency feels like neglect.

Patients experience these failures viscerally. Not as headlines. Not as KPIs. As fear. As confusion. As the sickening realization that the system they trusted to hold them cannot remember who they are today.

The Spillover Effect: How One Hospital’s Breach Kills Patients Across Town

Ransomware attacks do not confine their harm to breached facilities. When hospitals go offline, neighboring facilities absorb the displaced patients. The results are measurable and lethal.

A study examining the spillover effects of hospital ransomware attacks documented what happens at unaffected facilities when nearby hospitals are compromised. Emergency medical services arrivals increased 35.2%. Patient volume increased 15.1%. Waiting room time increased 47.6%. Stroke code activations increased 74.6%. Confirmed strokes increased 113.6%. Cardiac arrest cases increased 81%.

These are not theoretical projections. These are deaths. Strokes that became permanent disability. Hearts that stopped beating while patients waited in overwhelmed emergency departments.

In rural areas with no backup capacity, the consequences are starker. When a ransomware attack cripples the only hospital for 50 miles, entire communities lose access to emergency care. Patients die in ambulances. Patients die at home, afraid to seek care that is no longer available.

This is what happens when cybersecurity is treated as a perimeter problem instead of a circulatory system. The failure propagates. The harm compounds. The bodies pile up.

Patient Safety Begins in Architecture Decisions Made Before the Crisis

We have spent decades pretending that patient safety stops at the bedside. That once the clinician does their job, the rest is infrastructure trivia. That fiction is no longer survivable.

Patient safety begins upstream, in architecture decisions made months or years before a crisis. It lives in how systems degrade under stress. It lives in whether clinicians can access what they need without improvising dangerous workarounds. It comes down to whether the hospital stays legible when something goes wrong.

In other words, patient safety begins in the security strategy.

Consider what happened during the CommonSpirit Health attack in 2022. CommonSpirit is the second-largest hospital chain in the United States. When ransomware forced their systems offline, ER nurses reverted to paper charting under crushing patient loads. The risk of transcription errors multiplied. Misplaced files became lethal possibilities. Medication mistakes bloomed in the chaos.

These failures were not inevitable. They were consequences of a security architecture that optimized for control rather than resilience under pressure: systems designed with no plan for graceful degradation, controls that assumed perfect conditions, incident response protocols that prioritized optics over clarity.

The CISO owns these outcomes, whether the org chart acknowledges it or not.

The CISO Myth Was Built for Credit Cards, Not Bodies

The modern CISO role was forged in finance. In environments where the primary asset was data, the primary harm was theft, and the primary goal was containment. Lock the doors. Harden the perimeter. Minimize exposure.

That logic does not survive first contact with a hospital.

Hospitals are porous by necessity. They are staffed by humans under pressure. They are full of legacy devices that cannot be patched, clinical workflows that cannot be paused, and moments where speed matters more than elegance. You cannot “lock it down” without locking patients out.

So what happens instead is predictable. Security is imposed as control rather than designed as care. Clinicians become reluctant adversaries. Workarounds bloom like mold in a damp basement. Passwords get taped under keyboards because the system demanded obedience, not understanding.

In a study of clinical informaticians, 60.4% identified disruption to workflows and services as a top challenge to cybersecurity implementation. Over a single shift, nurses log in and out of multiple devices across several locations, and authentication requirements insert latency at every step. Even a 90-second delay per login has a measurable cumulative impact on patient care.
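To make that cumulative cost concrete, here is a back-of-envelope sketch. The 90-second delay comes from the figure above; the number of logins per shift is an assumed illustrative value, not a statistic from any study cited here.

```python
# Back-of-envelope estimate of time lost to authentication over one shift.
# LOGINS_PER_SHIFT is an assumed illustrative figure, not a cited statistic;
# DELAY_SECONDS is the per-authentication latency mentioned above.
LOGINS_PER_SHIFT = 70   # assumption: device logins/logouts across locations
DELAY_SECONDS = 90      # per-authentication delay

total_minutes = LOGINS_PER_SHIFT * DELAY_SECONDS / 60
print(f"~{total_minutes:.0f} minutes per shift spent waiting on authentication")
# → ~105 minutes
```

Under these assumptions, nearly two hours of a twelve-hour shift go to waiting on login screens, which is exactly the kind of friction that drives the workarounds described below.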

Workarounds are defined in the literature as “informal temporary practices for handling exceptions to normal workflow.” In healthcare, they are clinicians’ self-created solutions for achieving a work goal within a dysfunctional system of work processes that prevents or impedes that goal.

A system that clinicians must fight to use is already unsafe.

This is not a failure of training or attitude. It is a design failure rooted in a category error. We imported domination-era security models into coherence-driven care environments and then acted surprised when they shattered under load.

The Care Delivery Chain Includes You

Healthcare leaders love flow diagrams of the care journey. Intake. Triage. Diagnosis. Treatment. Discharge. Follow-up.

Security is rarely drawn on those diagrams. Which is adorable, given how many of those steps depend entirely on secure, available, trustworthy systems.

Every authentication requirement inserts latency into intake.
Every poorly tuned alert interrupts diagnosis.
Every brittle control that fails under stress fractures treatment continuity.
Every opaque outage poisons discharge confidence and follow-up adherence.

These are not side effects. They are causal contributions.

If your security control delays care, you own the outcome. If your architecture collapses silently, you own the confusion. If your incident response prioritizes optics over clarity, you own the fear.

The clinical chain does not care what your org chart says.

From Risk Posture to Clinical Posture

Most CISOs are trained to speak in the language of “risk appetite.” This is a comforting abstraction. It allows executives to pretend that risk is a negotiable commodity rather than a lived experience.

Patients do not consent to your risk appetite. They consent to care under an implied trust envelope.

They consent to care. And care has a different posture. It asks different questions. Not “what exposure can we tolerate?” but “what harm are we willing to cause?”

Translating cyber risk into clinical risk is not a communications exercise. It is a moral one. It requires admitting that uptime is not just a technical metric. It is a safety metric. That data integrity is not just accuracy, but diagnostic trust. That confidentiality breaches do not just violate the law, but rupture the emotional safety required for people to seek care at all.

Compliance will never measure this. Audits cannot feel fear. Dashboards cannot register betrayal. Only patients can.

Patients Feel Security Long Before They Understand It

Trust is not a value patients articulate. It is a condition they inhabit.

When systems work, trust is invisible. When systems fail, trust collapses instantly.

The evidence is unambiguous. After a data breach, 66% of patients report losing trust in the affected organization. 75% sever ties altogether. A study of 12 California hospitals over three years found that patients who experience a healthcare data breach are significantly less likely to visit hospitals in the following months.

Up to 40% of patients consider switching providers after a breach. Patients withhold important health information when trust in provider confidentiality erodes. They delay seeking medical care. They provide inaccurate information to protect their privacy. They avoid participating in medical research or health information exchanges.

This is not sentiment. This is signal.

Trust friction shows up as missed appointments, disengagement, second-guessing, and refusal. These are measurable outcomes that precede clinical deterioration. Ignoring them because they do not appear on a SOC report is how institutions quietly rot.

The SIGNAL methodology exists precisely to surface this kind of friction. To instrument emotional safety the same way we instrument throughput. To treat fear, confusion, and loss of confidence as early warning indicators rather than collateral damage.

In the Trust Envelope Model, these trust failures map directly to violations of structural invariants. Availability failures violate Dignity (the patient cannot access the care they need). Integrity failures violate Accountability (the system cannot be relied upon to maintain accurate information). Opaque incident response violates Agency (patients cannot understand what happened to them or what actions to take).

In healthcare, emotional safety is not a luxury. It is a prerequisite for effective care.

Case Sketches: No Villains, Just Physics

An oncology department taken offline by ransomware does not need a villain. It needs to be acknowledged that availability is care-critical. When chemotherapy infusion systems fail, patients with time-sensitive cancer treatments face survival consequences. The triage decisions required are not technical. They are ethical.

An ER slowed by EHR latency does not need a scapegoat. It needs to be recognized that performance under load is a safety requirement. When waiting times increase 47.6% at neighboring hospitals absorbing displaced patients, people die in waiting rooms.

A medical device isolated so aggressively it breaks monitoring continuity does not need a memo. It needs design humility. Network segmentation that prevents clinicians from accessing diagnostic imaging or infusion pump data creates the exact conditions for medical error that security was supposed to prevent.

These failures are not moral lapses; they are systemic consequences of treating security as a shield rather than a circulatory system. Of optimizing for control instead of coherence.

In Trust Thermodynamics terms, these systems have settled into local energy minima that optimize for compliance theater rather than actual resilience. The lattice configuration prioritizes demonstrable controls over survivable architecture. The proof of lattice maintenance is absent. When stress arrives, the system has no capacity to maintain its structure.

What Changes When the CISO Accepts the Clinical Role

Everything.

Decision criteria change. Controls are evaluated not just for strength, but for survivability under stress. The question becomes: “Does this security measure maintain its protective function when the hospital is operating under ransomware conditions, when staff are exhausted, when emergency patients are arriving faster than they can be processed?”

Escalation paths change. Incidents are communicated as care disruptions, not technical inconveniences. When Change Healthcare paid a $22 million ransom and the affiliate holding the data refused to release it, claiming he had not received his share, that was not a technical failure. That was a patient-safety crisis affecting prescription processing at 80% of U.S. pharmacies.

Accountability loops close. Security leaders remain present through recovery, not just containment. They participate in morbidity and mortality conferences. They sit in command centers during ethical triage decisions. They hear what happened to the patients whose chemotherapy was delayed.

Most importantly, the CISO stops asking, “Is this secure?” and starts asking, “Is this safe?”

That shift does not weaken security. It strengthens it. Systems designed to preserve trust under pressure are harder to exploit, harder to fracture, and easier to repair. Coherence is not softness. It is resilience.

Trust Thermodynamics teaches us that energy must be continuously supplied to maintain non-equilibrium order. The CISO who accepts their clinical role becomes an active source of that energy. They instrument trust friction. They measure emotional safety. They design for graceful degradation. They own the clinical consequences.

This is not an aspirational culture change. This is operational rigor applied to human safety instead of financial loss.

The Provocation

If your security program cannot explain how it behaves at the worst moment of someone’s life, it is not protecting healthcare. It is protecting itself.

Neprash and colleagues show that annual attacks more than doubled between 2016 and 2021. In 2024 alone, 374 U.S. healthcare institutions were hit by ransomware, causing network shutdowns, offline systems, delays in critical medical procedures, and rescheduled appointments. The average cost of a healthcare data breach now exceeds $10.93 million.

But the real cost is measured in bodies. In cardiac arrests with no favorable neurological outcomes. In strokes that became permanent disability. In chemotherapy delayed past the point of treatment efficacy. In patients who stopped seeking care altogether.

Hospitals do not need more polished compliance artifacts. They need security leaders willing to own the clinical consequences of their decisions.

The CISO is already in the care pathway. The clinical chain already includes authentication latency, availability failures, integrity violations, and trust erosion. These are not abstractions. They are mechanisms of harm.

The only question is whether CISOs will act like patient-safety actors. Whether they will attend the morbidity and mortality conferences. Whether they will sit in the command center during ethical triage. Whether they will measure trust friction as rigorously as they measure patch compliance.

Whether they will accept that security failures kill patients.

The operational disruption is documented. The clinical harm is measurable. The only open question is whether leadership treats this as patient safety or as IT weather.

Next in the Series

Patient Outcomes Are Trust Outcomes: How Trust Value Management Operationalizes What Clinical Research Has Been Measuring for Decades

*This article is available as a downloadable PDF slide deck for paid subscribers.
