The Founders @ We're Trustable - AI, BPO, CX, and Trust

The CISO Myth: Perimeter Guard in a Clinical World

When control replaces coherence, patients pay the price.

Rachel Maron
Jan 26, 2026

Why “lock it down” thinking fails where care must flow

The modern CISO was not born in a hospital.

The role emerged in the mid-1990s, after Citibank responded to a $10 million cyber theft by Vladimir Levin, who exploited its international funds transfer system. Steve Katz became the world’s first Chief Information Security Officer, hired with two directives: “Build the best cybersecurity department in the world” and “go out and spend time with our top international banking customers to limit the damage.”

The CISO role was forged in banks, payment networks, and financial services firms where the primary asset was transactional data, the primary threat was theft, and the primary strategy was containment. Build a perimeter. Harden it. Monitor ingress and egress. Assume that anything inside the walls is trusted and anything outside is hostile.

This model worked well enough when the worst-case failure mode was fraud.

It collapses completely when the worst-case failure mode is a patient dying because care was delayed.

Healthcare inherited the CISO role wholesale, without interrogating whether its metaphors, incentives, or success criteria made sense in an environment defined by urgency, human variability, and moral stakes. The result is a category error that keeps reproducing harm.

The Financial DNA of the CISO Role

Finance optimized security around three assumptions:

Assets are static. Money and records sit still until moved.

Workflows are predictable. Transactions follow defined paths.

Delay is tolerable. Seconds matter for fraud detection, but minutes rarely kill anyone.

Hospitals violate all three assumptions.

Patients move. Clinicians move. Devices move. Data moves continuously across wards, shifts, and contexts. Workflows are adaptive, improvisational, and deeply contingent on who is available, what is happening, and how sick someone is right now. Delay is not an inconvenience. It is a clinical variable.

The first CISO era, from 1995 to 2000, focused on passwords, log-on security, and perimeter defenses such as firewalls and intrusion detection systems. Early CISO functions emphasized technical security controls and incident response. The role was narrow in scope and scale, born of the first instances of hacking in financial services.

Yet we still deploy security controls designed for static assets, predictable paths, and tolerable latency, then act surprised when clinicians route around them.

The Perimeter Fantasy in a Porous Environment

Hospitals do not have clean perimeters.

They have open doors, emergency intakes, visiting hours, rotating staff, contractors, students, medical devices that predate modern security models, and patients who show up unannounced in distress.

Perimeter metaphors assume a stable “inside” and a dangerous “outside.” Hospitals are all inside. Or more accurately, they are all interface.

Every login is an interface. Every alert is an interface. Every timeout is an interface. Every system outage is an interface.

Security that assumes it can simply fence off risk misunderstands where risk actually lives. In healthcare, risk lives in friction, confusion, and delay. It lives in the moment a nurse cannot log in quickly enough. It lives in the extra step that breaks a mental flow during triage. It lives in the authentication failure that forces paper notes that will later be re-entered incorrectly.

Perimeters do not protect care. They constrict it.

How Clinicians Become the Enemy

When security is imposed instead of designed, clinicians are positioned as threats.

Not intentionally. Structurally.

Controls that prioritize compliance over usability teach clinicians a quiet lesson: the system does not understand your work. Faced with that mismatch, clinicians do what humans always do in high-stakes environments. They adapt.

They share credentials. They reuse passwords. They leave sessions open. They write things down. They bypass alerts.

The evidence is overwhelming.

Multiple studies show that well over half of healthcare professionals admit to sharing credentials. 46% of employees share work-related passwords for accounts used by multiple coworkers. Password sharing is identified as one of the most common HIPAA violations. Yet healthcare staff continue to share credentials because every minute counts in critical care.

Research on workarounds to computer access in healthcare organizations documents that “workarounds are the norm, rather than the exception.” They not only go unpunished, they go unnoticed in most settings and are often taught as correct practice.

Clinicians offer their logged-in session to the next clinician as a “professional courtesy,” even during security training sessions. Nurses circumvent the need to log out of computers on wheels by draping sweaters or large name signs over them. Staff defeat proximity sensors by putting Styrofoam cups over detectors. The most junior person on staff is asked to keep pressing the space bar on everyone’s keyboard to prevent timeouts.

These behaviors are often framed as “noncompliance” or “human error.” That framing is backwards.

Workarounds are not rebellion. They are rescue.

They are clinicians trying to preserve patient care in systems that were never designed to support it under real conditions. A security program that treats these adaptations as adversarial behavior is misdiagnosing the problem.

When clinicians must choose between following security rules and treating a patient, they will choose the patient every time. Any security model that does not anticipate this is unsafe by construction.

The Latency Tax

“Lock it down” thinking imposes a latency tax.

Each control adds seconds. Each reauthentication adds cognitive load. Each poorly tuned alert steals attention. Individually, these costs look trivial. Collectively, they are measurable, compounding, and dangerous.

Studies and clinician reports show authentication overhead consuming tens of minutes per shift, and in some cases over an hour per day. One clinician mentioned that his dictation system has a 5-minute timeout that requires a password. During a 14-hour day, he spends almost 1.5 hours logging in.
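The clinician’s numbers above can be reproduced with a back-of-envelope model. The 5-minute timeout and 14-hour day come from the article; the roughly 30-second cost per re-authentication is an assumption for illustration, not a figure from the source.

```python
# Back-of-envelope model of the "latency tax" described above.
# timeout_min and shift_hours come from the article's example;
# reauth_sec is an assumed per-login cost.

def latency_tax(shift_hours: float, timeout_min: float, reauth_sec: float) -> float:
    """Worst-case minutes per shift spent re-authenticating."""
    timeouts = (shift_hours * 60) / timeout_min  # maximum number of session expiries
    return timeouts * reauth_sec / 60            # minutes lost logging back in

minutes = latency_tax(shift_hours=14, timeout_min=5, reauth_sec=30)
print(f"{minutes:.0f} minutes ≈ {minutes / 60:.1f} hours per shift")
# → 84 minutes ≈ 1.4 hours per shift
```

At an assumed half-minute per login, the model lands within a few minutes of the clinician’s reported “almost 1.5 hours,” which is the point: seconds of friction compound into clinically significant time.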

Complex passwords and added authentication requirements exist to protect patient data, yet they ironically decrease productivity and increase security risk. Managing and resetting complex passwords disrupts clinical workflows and consumes time that would otherwise be spent providing care. This contributes to burnout: 21% of nurses cite excessive administrative tasks, such as documentation, charting, and electronic health records, as a top cause.

In time-sensitive environments, latency is not evenly distributed. It hits hardest during peak stress: shift changes, emergencies, understaffed nights, system degradation. That is exactly when security controls are least forgiving and clinicians are least able to absorb friction.

This is how well-intentioned security creates systemic brittleness. The system appears controlled under ideal conditions and fails catastrophically under pressure.

A design that only works when everything is going well is not a security design. It is a demo.

Control Versus Coherence

The core failure is philosophical.

Finance-oriented security optimizes for control. Healthcare requires coherence.

Control asks: Can we constrain behavior?
Coherence asks: Can the system hold together under stress?

Control treats humans as liabilities to be managed.
Coherence treats humans as adaptive components to be supported.

Control assumes obedience produces safety.
Coherence recognizes that understanding produces safety.

In healthcare, safety emerges from alignment between people, tools, and context. Security that disrupts that alignment undermines the very thing it claims to protect.

This is why zero-trust absolutism, imported without translation, so often backfires in hospitals. Traditional security follows a “castle-and-moat” approach, trusting everything inside the network. Zero trust treats every access request as potentially hostile, requiring verification regardless of location or network status.

The concept makes sense in theory. In practice, healthcare organizations face unique challenges. Health IT leaders realize their cybersecurity strategies should not tax already time-strapped clinicians by requiring them to sign into multiple applications every day. When done well, zero-trust policies and controls should work successfully behind the scenes with no noticeable impact on clinicians.

But implementation requires careful balance. Healthcare has one of the highest densities of connected devices of any industry. Most clinical procedures rely on several medical and IoT devices that instantly sync data to medical databases. For healthcare organizations, device functionality comes first and security second: all devices must work.

Zero trust in a care environment becomes zero flow without careful implementation. Zero flow becomes zero safety.

The difference between implementing zero trust in a healthcare setting versus other industries is that instead of just protecting devices and data, the goal of clinical zero trust is also to protect the physical workflows of care delivery, including the people and processes responsible. Healthcare organizations will likely operate in a hybrid zero-trust/perimeter-based mode indefinitely while modernizing their infrastructure.
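One way to make “clinical zero trust” concrete is the break-glass pattern: every access request is verified, but a declared emergency degrades the control gracefully, allowing care to flow now and shifting enforcement to after-the-fact audit. The sketch below is a hypothetical policy decision function, assuming the article’s framing; all names are illustrative and it reflects no particular vendor’s API.

```python
# Hypothetical sketch of a clinical zero-trust decision point with
# break-glass access: verify everything, but never hard-stop care.
# All field and function names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool
    device_known: bool
    emergency_declared: bool  # e.g. a break-glass button in the EHR

def decide(req: AccessRequest) -> tuple[bool, str]:
    """Return (allowed, audit_level) for a single access request."""
    if req.user_authenticated and req.device_known:
        return True, "normal"        # fully verified path: routine logging
    if req.emergency_declared:
        return True, "break-glass"   # care flows now; heavy review afterward
    return False, "denied"           # no context that justifies access

# During a code blue, an unverified session still gets through,
# but with an elevated audit trail instead of a hard stop.
print(decide(AccessRequest(False, False, True)))
# → (True, 'break-glass')
```

The design choice is the point: the failure mode under stress is elevated auditing, not blocked care, which is what “degrading gracefully” means in a clinical setting.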

The Signal You Cannot Ignore

Here is the signal that matters more than any audit finding:

A system clinicians work around is already unsafe.

Not insecure. Unsafe.

Workarounds are evidence of design failure, not user failure. They are leading indicators of where security is misaligned with care. They tell you exactly where friction is accumulating and where risk is being displaced rather than reduced.

Treating workarounds as policy violations misses their diagnostic value. They are telling you exactly where the system cannot carry the load you have placed on it.

When researchers studied clinicians doing their work, they found that “workarounds to cyber security are the norm, rather than the exception.” Clinicians acknowledge that effective security controls are important, especially in an essential service like healthcare. Unfortunately, all too often with these tools, clinicians cannot do their job. The medical mission trumps the security mission.

These are not terrorists or black-hat hackers. These are clinicians trying to use the computer system for conventional healthcare activities. The underlying problem is that computer and security experts are rarely also clinical care experts.

SIGNAL exists to surface this truth. To treat friction as data. To instrument the gap between how systems are supposed to work and how they actually work when people’s lives are on the line.

Redefining the CISO Role Again

If the CISO is still operating as a perimeter guard, they are guarding the wrong thing.

The job is no longer to keep threats out at all costs. The job is to ensure that care can continue safely even when things go wrong. That requires abandoning metaphors that treat hospitals like vaults and embracing models that treat them like living systems.

The CISO role has evolved significantly since 1995. By 2000, the CISO’s responsibilities extended beyond corporate boundaries to include e-business partnerships, mirroring institutional change. The role shifted toward enterprise risk, governance, privacy, board-level engagement, and business needs.

Steve Katz said the role is about business risk, with cybersecurity a way to assess that risk, “not an end in itself.” Key skills became organizational leadership, strategic thinking, communication with boards, budget management, vendor relations, business processes, regulatory oversight, and the ability to merge security outcomes with business needs.

In healthcare, this evolution must go further.

The healthcare CISO must understand that clinical workflow is not a constraint to work around. It is the thing being protected. Security controls must be evaluated not just for strength, but for survivability under the conditions where they will actually be used: understaffed emergency departments, shift changes, system degradation, crisis scenarios.

The CISO myth persists because it is familiar and legible to boards. But familiarity is not fitness. Legibility is not safety.

Healthcare does not need better walls. It needs systems that bend without breaking, controls that degrade gracefully, and security leaders who understand that friction in care pathways is not a nuisance. It is a warning.

The perimeter guard was never the right archetype.

The Provocation

Financial services optimized security for an environment where minutes of delay might mean lost revenue. Healthcare operates in an environment where minutes of delay can mean death.

The research is unambiguous. 73% of healthcare professionals violate security policies not out of malice but out of necessity. 45 minutes per shift consumed by authentication overhead. 1.5 hours per day spent logging into systems with aggressive timeouts. Workarounds documented as the norm across every healthcare setting studied.

These are not implementation failures. These are design failures.

Security models built for static assets, predictable workflows, and tolerable latency will always fail in environments characterized by movement, improvisation, and urgency.

The CISO who continues to optimize for perimeter defense in a clinical world is solving the wrong problem. The walls are strong, but the patients are dying inside them because care cannot flow through the checkpoints fast enough.

Healthcare security leaders must accept that their role is fundamentally different from their counterparts in finance. The worst-case scenario is not a data breach. It is a patient dying because the security architecture made care impossible to deliver.

Zero trust can work in healthcare, but only when implemented with clinical zero trust principles: protecting workflows, not just data. Maintaining care delivery under stress, not just preventing unauthorized access. Treating clinicians as the adaptive components that keep the system functioning, not as the security vulnerabilities to be constrained.

A system that clinicians must fight to use is not secure. It is unsafe. The workarounds prove it. The latency proves it. The burnout proves it. The deaths prove it.

The only question is whether CISOs will recognize that guarding the perimeter is not the same as protecting care.

The choice is binary.

* A deck of this article is available for paid subscribers.
