<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Founders @ We're Trustable - AI, BPO, CX, and Trust]]></title><description><![CDATA[Leveraging decades of experience, exploring where customer experience meets trust and safety—driving engagement, efficiency, and smart outsourcing decisions.]]></description><link>https://www.trustable.blog</link><image><url>https://substackcdn.com/image/fetch/$s_!9Ypq!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11201f58-1c37-493a-bf55-6c65849d5a3d_1024x1024.png</url><title>The Founders @ We&apos;re Trustable - AI, BPO, CX, and Trust</title><link>https://www.trustable.blog</link></image><generator>Substack</generator><lastBuildDate>Thu, 14 May 2026 10:29:04 GMT</lastBuildDate><atom:link href="https://www.trustable.blog/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Rachel Maron]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[rpmconsulting@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[rpmconsulting@substack.com]]></itunes:email><itunes:name><![CDATA[Rachel Maron]]></itunes:name></itunes:owner><itunes:author><![CDATA[Rachel Maron]]></itunes:author><googleplay:owner><![CDATA[rpmconsulting@substack.com]]></googleplay:owner><googleplay:email><![CDATA[rpmconsulting@substack.com]]></googleplay:email><googleplay:author><![CDATA[Rachel Maron]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Anthropic Mythos and the End of Temporal Arbitrage]]></title><description><![CDATA[How Automated Vulnerability Discovery Collapses The Deferred-Risk Model Behind Venture 
Software]]></description><link>https://www.trustable.blog/p/anthropic-mythos-and-the-end-of-temporal</link><guid isPermaLink="false">https://www.trustable.blog/p/anthropic-mythos-and-the-end-of-temporal</guid><dc:creator><![CDATA[Sabino Marquez]]></dc:creator><pubDate>Fri, 24 Apr 2026 19:05:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!OuiX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5b78de2-5a06-4058-acdb-48c00b0b4377_1024x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OuiX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5b78de2-5a06-4058-acdb-48c00b0b4377_1024x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OuiX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5b78de2-5a06-4058-acdb-48c00b0b4377_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!OuiX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5b78de2-5a06-4058-acdb-48c00b0b4377_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!OuiX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5b78de2-5a06-4058-acdb-48c00b0b4377_1024x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!OuiX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5b78de2-5a06-4058-acdb-48c00b0b4377_1024x1536.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!OuiX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5b78de2-5a06-4058-acdb-48c00b0b4377_1024x1536.png" width="1024" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f5b78de2-5a06-4058-acdb-48c00b0b4377_1024x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!OuiX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5b78de2-5a06-4058-acdb-48c00b0b4377_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!OuiX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5b78de2-5a06-4058-acdb-48c00b0b4377_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!OuiX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5b78de2-5a06-4058-acdb-48c00b0b4377_1024x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!OuiX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5b78de2-5a06-4058-acdb-48c00b0b4377_1024x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset 
pencraft icon-container restack-image"></button></div></div></div></a></figure></div><div class="pullquote"><p><em>Note: Many of my colleagues have sent me links to the Anthropic Mythos story, all of them asking me for guidance on how to think about this development. This story is not a technical story, but one of deferred incentives. It is through that lens that I run the following analysis.</em></p></div><p>For three decades, venture-backed software investors and operators worked under a stable assumption: vulnerability discovery was scarce, uneven, and expensive. This assumption shaped everything. Backlogs could grow without immediate consequence. Security could be deferred behind roadmap pressure. Quality could be subordinated to velocity because exposure was probabilistic and delayed. 
Anthropic&#8217;s Mythos has invalidated these core assumptions in ways that materially impact capital allocation. Unlike prior generation tools that required integration, configuration, and human triage to function at scale, Mythos operates as an autonomous agent across heterogeneous estates without those constraints. Automated vulnerability discovery systems like Mythos collapse the cost and time required to locate exploitable software defects across fleets and estates. They do not create new classes of defects but instead remove the scarcity constraint that previously limited how quickly existing failures were found. The effect is structural and cross-domain. Latent security debt is no longer latent: it is continuously discoverable.</p><p>This fundamentally changes the economic character of software as a class. Unresolved vulnerabilities no longer behave like inventory. Instead, they behave like unbooked liabilities where each deferred patch carries a time-bound exposure whose probability of discovery is increasing and whose time to exploitation is compressing. The security backlog, long a neutral queue of work with no user persona to build for, became a ledger of accepted risk under a discovery regime that no longer supports deferral. This is the surface description of the challenge, but it is not the underlying mechanism which is best described as <em>temporal arbitrage as a capital strategy.</em></p><p>Venture-backed software was financed under an assumption that <em>time itself</em> could be exploited. The organization ships before it knows, accumulates defects as a byproduct of meeting capital timelines, converts early adoption into valuation, and exits or recapitalizes before the accumulated consequences fully materialize. The codebase, the company, and even the users may be treated as disposable. 
Strategically, the firm is treated as a <em>vehicle</em> for value rather than a compounding asset, where ultimate outcomes are expected to be realized through timed capital events. In this model, security, resilience, and correctness are deferrable costs; the system prices them into the future and assumes that future can be outrun.</p><p>This is the equilibrium that has held in the venture software business since 1995. Automated vulnerability discovery and exploitation breaks that equilibrium completely. When vulnerability discovery becomes dense, systematic, and inexpensive, the future is no longer distant; the accumulated consequences embedded in the codebase are pulled forward. The time window between creation and exposure compresses and the organization is forced to confront the totality of its deferred decisions within <em>operating time</em>, not exit time. Overnight, the security backlog stops behaving like inventory and begins behaving like a<em>n unplanned matured obligation</em>.</p><p>The obvious response to this observation has been to say, &#8220;the same automation that accelerates discovery will also accelerate remediation. Detection systems will find the defects. Patching systems will fix them. Static analysis, dynamic analysis, code generation, dependency intelligence, and automated repair will scale together. The system, in this account, will preserve equilibrium through reciprocal automation.&#8221; That response feels natural, but it fundamentally misunderstands the asymmetry in cost and motion. <em>Detection</em> is a search problem that benefits directly from scale, parallelism, and pattern recognition. Once the cost of search collapses, discovery density increases across the entire surface. <em>Remediation</em> is a <strong>transformation problem</strong>. 
It requires understanding intent, dependency interaction, state behavior, regression risk, business impact, and release safety.</p><p>AI-driven automation can reduce portions of the remediation work, but it cannot erase the coordination cost, validation burden, or system risk attached to change. Faster discovery of security defects does not produce proportional remediation capacity: it produces <em>intake pressure</em>. When both sides automate, the system accelerates: discovery improves, exploitation improves, patch generation improves, and exploit adaptation improves. The temporal cycle that drives capital strategy compresses. The organization is no longer managing a software vulnerability program; it finds itself inside a machine-speed contest over the integrity of its own artifact. This is now the baseline condition of software companies, writ large.</p><p>This is not the first time the world has been confronted with an all-hands-on-deck critical software defect. For those of us who worked through Y2K, we can understand this at the level of forced remediation; however, that understanding fails at the level of structure because Y2K work was bounded by a known defect class and a fixed date. The current condition of infinite discovery and exploitation at negligible cost has neither boundary nor termination: every codebase, every dependency tree, every internal tool, every acquired system, and every integration point exists inside a persistent discovery field. This global persistence removes the go-to capital strategy: <em>escape through delay</em>.</p><p>It also removes the possibility of escape through the fantasy of later automation. The claim that future patching systems will absorb present negligence is the old temporal arbitrage restated in technical form; it assumes that consequence can still be pushed forward because a later capability will carry it. 
That assumption has <em>already</em> failed once, with the implications extending far beyond engineering. A company whose software cannot withstand systematic automated inspection by hostile AI is not merely insecure: it is <em><strong>mispriced</strong></em>. Its valuation assumed that certain classes of risk would remain undiscovered long enough to be irrelevant to capital realization. That assumption is no longer defensible. Value, in this context, shifts location.</p><p>Under temporal arbitrage strategies, value was located in <em>motion</em>: user growth, revenue acceleration, narrative dominance, and the ability to reach successive funding events before operational contradiction can condense. Under a continuous defect discovery paradigm, value relocates to the artifact itself: the quality, resilience, and trustworthiness of the system under inspection. This ontological shift redefines what it means for a software company to hold value. Software now mediates health systems, financial systems, logistics, communications, governance, and social coordination. It functions as infrastructure even when the company that produced it did not intend to become infrastructure. People depend on software with the quiet expectation that it will safely hold their value object. That dependence is not erased because the software was financed as a temporary capital vehicle.</p><p>An operating model that introduces defects freely and trusts future machines to repair them is incompatible with that role; thus, the operating model must evolve accordingly. The relevant question is no longer whether vulnerabilities exist. They do. The question is whether the organization can operate under conditions where vulnerabilities are continuously surfaced by AI and must be resolved within a compressed time horizon. This introduces a new control variable for value defense: <em>remediation velocity</em>. 
Remediation velocity is a measure of whether the organization can absorb and neutralize a continuous stream of adversarially discovered security defects without destabilizing widget delivery. Most organizations are not structured for this condition. They are structured for episodic audits, periodic testing, and reactive incident response. Those operating models assume defect discovery scarcity, an assumption which no longer holds.</p><p>As security defect discovery becomes dense and systematic, several shifts occur. First, exposure becomes attributable. A deferred vulnerability is a recorded decision to carry known risk into an environment where discovery is expected. Second, perimeter distinctions erode. Internal systems, staging environments, legacy services, and vendor integrations are subject to the same discovery dynamics as externally facing infrastructure. The classification of &#8220;non-critical&#8221; becomes unstable once lateral movement and chaining are trivial. Third, standards of care move. As automated discovery becomes widely available, the definition of <em>reasonably knowable risk</em> expands. Liability frameworks and insurance models will adjust accordingly. Assertions of posture will carry less weight than demonstrated and evidenceable response performance.</p><p>These shifts converge on a single condition: the firm&#8217;s software fleet is now a capital liability surface under continuous adversarial inspection. This is not a <em>security problem</em> to be solved. It is a <em>balance sheet correction</em> to be absorbed. The corrective is not another tool layer or stack, but an operating model that treats trust, quality, resilience, and security as capital value conditions rather than downstream operational controls. 
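</p><p><em>The intake-pressure asymmetry described above can be made concrete with a toy model. The rates below are illustrative assumptions, not measurements; the point is only that once automated discovery outpaces remediation capacity, the known-defect backlog grows without bound.</em></p>

```python
# Toy backlog model: each week the estate gains `discovery_rate` new findings
# and retires `remediation_rate` of them. All rates are illustrative assumptions.

def backlog_over_time(discovery_rate, remediation_rate, weeks, start=0):
    """Return the open-findings backlog at the end of each week."""
    backlog, history = start, []
    for _ in range(weeks):
        backlog = max(0, backlog + discovery_rate - remediation_rate)
        history.append(backlog)
    return history

# Scarce-discovery regime: deferral looks free because the queue drains.
print(backlog_over_time(discovery_rate=3, remediation_rate=5, weeks=12)[-1])   # 0

# Dense-discovery regime: same remediation capacity, collapsed search cost.
print(backlog_over_time(discovery_rate=40, remediation_rate=5, weeks=12)[-1])  # 420
```

<p><em>Remediation velocity, in this framing, is the question of whether the second curve can be bent back toward the first without halting delivery.</em></p><p>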
That model must connect software production to value defense, capital timing, buyer confidence, diligence readiness, insurance exposure, and executive accountability.</p><p>This requires a transition from backlog management to <em>Exposure Management</em>. Exposure Management requires measurable control over three dimensions: coverage, latency, and dependency governance.</p><ul><li><p><em>Coverage</em> defines the proportion of the software estate subject to continuous and instrumented analysis.</p></li><li><p><em>Latency</em> defines the time between identification and remediation.</p></li><li><p><em>Dependency Governance </em>defines the control of external code ingestion and the minimization of unnecessary surface area.</p></li></ul><p>Security defect discovery must be treated as a <em>production input</em> because the organization must be able to ingest security findings at scale, triage them based on exploitability and impact more quickly than an adversarial AI can, and execute remediation as a continuous function on a living codebase. But this is only the minimum requirement. The deeper requirement is <em>trust value management</em>. A software company operating under this condition must know where trust value is created, where trust value is consumed, where trust value is degraded, and where trust debt is accumulating inside the stakeholder value journey. It must be able to translate software quality into financial exposure, customer confidence, sales velocity, diligence defensibility, and valuation protection. 
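</p><p><em>One way to see coverage, latency, and dependency governance as operational numbers rather than slogans is a simple scorecard. The field names and thresholds below are hypothetical illustrations, not an established standard.</em></p>

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Service:
    name: str
    continuously_scanned: bool    # coverage: under instrumented analysis?
    open_finding_ages_days: list  # latency: days each finding has sat open
    direct_dependencies: int      # governance: externally ingested surface

def exposure_report(estate, max_mean_latency_days=30, max_deps=150):
    """Roll an estate up into the three exposure-management dimensions."""
    scanned = [s for s in estate if s.continuously_scanned]
    ages = [a for s in scanned for a in s.open_finding_ages_days]
    return {
        "coverage": len(scanned) / len(estate),
        "mean_latency_days": mean(ages) if ages else 0.0,
        "latency_breach": bool(ages) and mean(ages) > max_mean_latency_days,
        "over_dependency_budget": [s.name for s in estate
                                   if s.direct_dependencies > max_deps],
    }

estate = [
    Service("billing", True, [4, 12], 80),
    Service("legacy-crm", False, [], 410),   # unscanned AND over budget
    Service("api-gateway", True, [45], 60),
]
report = exposure_report(estate)
print(report["coverage"])                # share of the estate under analysis
print(report["over_dependency_budget"])  # ['legacy-crm']
```

<p><em>An estate that cannot produce these three numbers on demand is, by the argument above, carrying unbooked liability it cannot price.</em></p><p>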
It must be able to keep the velocity thesis alive without depending on ignorance, deferral, or delayed consequence.</p><p>That is the proper strategic correction to the capital operating model, not a &#8220;compliance project&#8221; or a &#8220;risk management initiative.&#8221; An operating model capable of producing trust at machine speed requires that trust itself be treated as a manufactured, measurable asset; one with production inputs, quality controls, and verifiable outputs. Ultimately, the answer to the automated vulnerability discovery panic that Mythos sparked is an <strong>operating model that makes trustworthy software production a governed value system</strong>. For decades, the industry converted future consequences into present value by assuming that those future consequences could be outrun. Automated security defect discovery removes that assumption, and automated patching does not restore it but instead accelerates the field in which the original debt must now be paid. The security backlog is a liability exposure being systematically searched by value-eroding adversaries. The timeline for temporal arbitrage has been rendered brittle and short horizon. The dominant question is no longer whether the organization contains defects. It does. 
The dominant question is whether the organization has an <em>operating model capable of producing trust at the speed of automated adversarial discovery.</em></p><h4>Addenda:</h4><p>Automated security defect discovery changes the economic character of software by converting latent security debt into continuously discoverable exposure.</p><ul><li><p>If this analysis is wrong, the evidence will appear as stable remediation ratios, unchanged underwriting assumptions, unchanged diligence practices, and no measurable increase in defect-driven pressure from customers, insurers, acquirers, or regulators.</p></li><li><p>If this analysis is correct, the industry response will be narrative dilution, absorbing this argument into tooling strategies and existing compliance categories without adopting the operating model change it requires. That response is materially insufficient: rebranding the security stack is not the same as evolving the capital operating model to meet the moment.</p></li></ul>]]></content:encoded></item><item><title><![CDATA[When Government Abdicates: A Complete Response to the White House National Policy Framework for AI]]></title><description><![CDATA[The White House National Policy Framework for Artificial Intelligence reveals a government that has fundamentally misunderstood the AI problem.]]></description><link>https://www.trustable.blog/p/when-government-abdicates-a-complete</link><guid isPermaLink="false">https://www.trustable.blog/p/when-government-abdicates-a-complete</guid><dc:creator><![CDATA[Rachel Maron]]></dc:creator><pubDate>Mon, 23 Mar 2026 16:48:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ys7I!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff945025d-7e04-4d2b-9167-83917b4c727f_1002x875.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" 
></p><p><strong>Trustable Policy Response | March 2026</strong></p><div class="file-embed-wrapper" data-component-name="FileToDOM"><div class="file-embed-container-reader"><div class="file-embed-container-top"><image class="file-embed-thumbnail-default" src="https://substackcdn.com/image/fetch/$s_!0Cy0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack.com%2Fimg%2Fattachment_icon.svg"></image><div class="file-embed-details"><div class="file-embed-details-h1">A National Policy Framework for Artificial Intelligence</div><div class="file-embed-details-h2">234KB &#8729; PDF file</div></div><a class="file-embed-button wide" href="https://www.trustable.blog/api/v1/file/b9aa0519-058e-4e3f-8ff1-b65fb3fe0ef1.pdf"><span class="file-embed-button-text">Download</span></a></div><div class="file-embed-description">4 WHOLE pages of nothing. Please download and read; it should take you all of 3 minutes.</div></div></div><p></p><h2>I. 
The Framework&#8217;s Foundational Failure</h2><p>The White House National Policy Framework for Artificial Intelligence reveals a government that has fundamentally misunderstood the AI problem. The framework treats AI safety as a matter of removing regulatory barriers and trusting industry self-certification. It explicitly prohibits creation of new federal verification infrastructure while preempting states from building it themselves. The result is not innovation enablement; it is the complete abandonment of human&#8212;AMERICAN&#8212;safety verification as a function.</p><p>This is an empirical observation about what happens when verification mechanisms are designed for deployment enablement rather than danger detection.</p><p>The framework proposes to:</p><ul><li><p>Prevent creation of new federal AI oversight (&#8220;no new federal rulemaking body&#8221;)</p></li><li><p>Preempt state AI development regulation (&#8220;inherently interstate phenomenon&#8221;)</p></li><li><p>Rely on &#8220;industry-led standards&#8221; through existing regulatory bodies</p></li><li><p>Create &#8220;minimally burdensome&#8221; national standards</p></li><li><p>Establish regulatory sandboxes to accelerate deployment</p></li></ul><p>What the framework fails to provide:</p><ul><li><p>Any mechanism to verify that AI systems are actually safe</p></li><li><p>Any requirement for adversarial testing before deployment</p></li><li><p>Any continuous monitoring as systems evolve</p></li><li><p>Any independent audit of industry claims</p></li><li><p>Any enforcement beyond after-the-fact legal liability</p></li></ul><p>The absence does not seem accidental. It looks systematic. Every section of the framework, such as it is, optimizes for passage: enabling AI systems to move from development to deployment unburdened by the inconvenience of safety. 
The question &#8220;Can stakeholders safely entrust their value to this system?&#8221; is never asked because the framework is not designed to answer it.</p><p>What follows is a section-by-section analysis of what the framework proposes, why it fails, and what verification infrastructure must exist instead.</p><h2>II. Section-by-Section Gap Analysis with Trustable Answers</h2><h3><strong>I. Protecting Children and Empowering Parents</strong></h3><h4><strong>What the Framework Proposes</strong></h4><p>The White House calls for age-assurance requirements, features reducing exploitation risks, and applying existing privacy protections to AI systems. It explicitly instructs Congress to &#8220;avoid setting ambiguous standards about permissible content, or open-ended liability.&#8221;</p><h4><strong>Why This Fails</strong></h4><p>This section treats child protection as a matter of implementing features and documenting compliance. Organizations will add age-verification gates. They will create safety features. They will produce documentation showing adherence to child privacy laws. Systems will deploy. Children will be harmed.</p><p>The failure occurs because the framework contains no mechanism to verify that protective features actually work under operational conditions. Age-assurance can be circumvented. Safety features can fail. Privacy protections will be violated. 
</p><p>Without adversarial testing designed to discover these failure modes BEFORE deployment, child protection becomes documentation theater.</p><p>The instruction to avoid &#8220;ambiguous standards&#8221; and &#8220;open-ended liability&#8221; reveals the underlying priority: protecting AI developers from legal risk rather than protecting children from AI systems.</p><h4><strong>The Trustable Answer</strong></h4><p><strong>Requirement:</strong> AI systems claiming child safety must demonstrate through adversarial testing that protections cannot be circumvented.</p><p><strong>Implementation:</strong></p><ul><li><p><strong>Adversarial verification mandate:</strong> Before deployment to minor-accessible platforms, systems must undergo hostile testing where red teams attempt to bypass age verification, defeat safety features, and access protected data</p></li><li><p><strong>Continuous monitoring requirement:</strong> Child safety features must be re-verified every 90 days as systems retrain and evolve</p></li><li><p><strong>Independent audit:</strong> Insurance-backed verification by entities whose economic survival does not depend on approving systems for deployment</p></li><li><p><strong>Proof-based deployment:</strong> No authorization for minor-accessible deployment without registry-verified child safety proofs</p></li><li><p><strong>Automatic revocation:</strong> Systems that fail verification or undergo material changes without re-verification lose deployment authorization immediately</p></li></ul><p><strong>Why this works:</strong> It shifts the question from &#8220;Did you implement safety features?&#8221; to &#8220;Do your safety features withstand adversarial attack under operational conditions?&#8221; The former is a documentation exercise. 
The latter is an engineering requirement.</p><p><strong>Insurance enforcement mechanism:</strong> Platforms deploying AI to minors should not be able to obtain liability coverage without registry-verified child safety proofs. Underwriters cannot price unmeasurable risk. When the first major child exploitation incident occurs through a &#8220;compliant&#8221; AI system, insurance markets will demand adversarial verification. The only question is whether verification infrastructure exists before or after that first incident, and then, of course, how many accumulated incidents must occur before anything at all is done.</p><h3><strong>II. Safeguarding and Strengthening American Communities</strong></h3><h4><strong>What the Framework Proposes</strong></h4><p>Protection from increased electricity costs, streamlined permitting for AI infrastructure, augmented law enforcement against AI-enabled fraud, national security assessment of frontier AI capabilities, and resources for small business AI adoption.</p><h4><strong>Why This Fails</strong></h4><p>Every item in this section assumes that compliance with stated objectives equals actual safety; it absolutely does not. Law enforcement will receive resources to combat AI fraud. National security agencies will assess frontier AI capabilities. Small businesses will receive AI tools. None of these activities include verification that the systems actually work safely.</p><p>Consider the national security assessment requirement. Agencies will &#8220;possess sufficient technical capacity to understand frontier AI model capabilities and any associated national security considerations.&#8221; This presumes agencies can reliably assess capabilities from vendor-provided information. They cannot. Frontier AI systems evolve weekly. Capabilities emerge unpredictably. 
(<a href="https://www.trustable.blog/p/the-trojan-trust-problem-why-ais">OWLS ANYONE</a>&#8253;&#8253;) By the time an assessment concludes, the system being assessed has changed materially.</p><p>The framework treats assessment as a one-time compliance action rather than a continuous verification process.</p><h4><strong>The Trustable Answer</strong></h4><p><strong>Requirement:</strong> AI systems deployed in critical infrastructure must produce continuous, adversarially-tested proofs of safety boundaries and capability limits.</p><p><strong>Implementation:</strong></p><ul><li><p><strong>Fraud prevention verification:</strong> Systems claiming to detect AI-enabled fraud must demonstrate effectiveness through red-team exercises where attackers attempt to execute fraud that systems should prevent</p></li><li><p><strong>Continuous capability monitoring:</strong> Frontier AI systems must generate verifiable capability assessments every 30 days through standardized testing protocols that detect emergent capabilities</p></li><li><p><strong>Small business AI verification:</strong> AI tools provided to small businesses must meet the same adversarial testing requirements as enterprise systems&#8212;the size of the deploying organization does not reduce stakeholder risk</p></li><li><p><strong>Infrastructure deployment gates:</strong> Critical infrastructure AI (power grid management, financial systems, transportation) cannot deploy without insurance backed by verified safety proofs</p></li><li><p><strong>Real-time capability alerts:</strong> When frontier AI systems demonstrate capabilities outside previously verified boundaries, registry status changes automatically and dependent systems receive alerts</p></li></ul><p><strong>Why this works:</strong> National security agencies lack capacity to conduct continuous technical assessment. Registry infrastructure provides that capacity through distributed, insurance-backed verification. 
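</p><p><em>The registry mechanics sketched in this section reduce to a small amount of logic. The statuses, field names, and 30-day window below simply restate the proposal above in code; none of this is an existing system.</em></p>

```python
from datetime import date, timedelta

# Hypothetical registry rule, restating the 30-day capability-testing cadence
# proposed above: a deployment authorization is only good while its latest
# adversarial proof is fresh and no material change has occurred since.
REVERIFICATION_WINDOW = timedelta(days=30)

def registry_status(last_verified: date, last_material_change: date, today: date) -> str:
    """Deployment authorization as a function of proof freshness."""
    if last_material_change > last_verified:
        return "revoked"    # system changed since its last adversarial proof
    if today - last_verified > REVERIFICATION_WINDOW:
        return "expired"    # proof is stale; re-verification required
    return "verified"

today = date(2026, 3, 15)
print(registry_status(date(2026, 3, 1), date(2026, 2, 20), today))   # verified
print(registry_status(date(2026, 3, 1), date(2026, 3, 10), today))   # revoked
print(registry_status(date(2026, 1, 1), date(2025, 12, 1), today))   # expired
```

<p><em>The enforcement point is that &#8220;verified&#8221; is a decaying state: it must be re-earned continuously, never granted once.</em></p><p>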
When a frontier AI system demonstrates dangerous capability, the question is not &#8220;Did the vendor tell us about this?&#8221; but &#8220;Did the system pass adversarial capability testing this month?&#8221;</p><p><strong>Market enforcement:</strong> Critical infrastructure operators cannot obtain insurance coverage for unverified AI deployment. When the first AI-caused infrastructure failure occurs, liability will be enormous. Insurers will demand proof that systems were verified. Operators will demand registry infrastructure that makes verification possible. Again: the question is whether this infrastructure exists before the catastrophic failure or after.</p><h3><strong>III. Respecting Intellectual Property Rights and Supporting Creators</strong></h3><h4><strong>What the Framework Proposes</strong></h4><p>Let courts resolve training/copyright questions, consider collective licensing frameworks (but don&#8217;t mandate when licensing is required), establish framework for unauthorized AI replicas, monitor copyright developments.</p><h4><strong>Why This Fails</strong></h4><p>This section explicitly delegates safety determination to post-harm litigation. &#8220;Let courts resolve&#8221; means creators are harmed first, seek redress second. By the time courts establish that training violated copyright, millions of creators have been harmed and AI systems trained on their work are embedded in commercial infrastructure.</p><p>The framework treats intellectual property protection as a matter to be resolved through legal process rather than prevented through verification infrastructure. This is structural abdication. Courts can determine liability after harm occurs. They cannot prevent harm before it occurs. 
Prevention requires verification that systems respect IP boundaries before deployment.</p><h4><strong>The Trustable Answer</strong></h4><p><strong>Requirement:</strong> AI systems must produce cryptographically verifiable evidence of training data provenance and licensing status before commercial deployment.</p><p><strong>Implementation:</strong></p><ul><li><p><strong>Data provenance verification:</strong> Systems must generate auditable records showing source, licensing status, and opt-out compliance for all training data</p></li><li><p><strong>Continuous IP compliance monitoring:</strong> Systems must demonstrate through ongoing testing that outputs don&#8217;t reproduce copyrighted material beyond fair use thresholds</p></li><li><p><strong>Independent audit of opt-out mechanisms:</strong> Third-party verification that robots.txt files, opt-out registries, and licensing restrictions are actually honored in training pipelines</p></li><li><p><strong>Pre-deployment proof requirement:</strong> No commercial deployment without verified data provenance demonstrating lawful training</p></li><li><p><strong>Automatic revocation for IP violations:</strong> Systems discovered violating IP protections lose registry standing; dependent systems receive immediate alerts</p></li></ul><p><strong>Why this works:</strong> Courts provide remedy after harm. Verification prevents harm before deployment. The two mechanisms serve different functions. The framework recognizes only one.</p><p><strong>Creator protection enforcement:</strong> When creators sue for copyright infringement, defendants will claim fair use, good faith reliance on industry standards, and compliance with existing frameworks. Courts will take years to resolve these questions. Meanwhile, AI systems continue operating. Registry infrastructure shifts the burden: systems must prove lawful training before deployment, not defend against infringement claims after deployment. 
This is the difference between prevention and remedy.</p><h3><strong>IV. Preventing Censorship and Protecting Free Speech</strong></h3><h4><strong>What the Framework Proposes</strong></h4><p>Prevent government from coercing AI providers to ban/alter content based on partisan agendas, provide redress for government censorship efforts.</p><h4><strong>Why This Fails</strong></h4><p>The framework addresses government censorship while ignoring that AI systems themselves function as content curation infrastructure. When an AI system systematically filters certain political viewpoints through opaque algorithmic decisions (GROK&#8253;), the speech restriction is just as effective as if government had mandated it. The framework prevents the visible threat (government coercion) while ignoring the operational threat (algorithmic filtering).</p><p>This is not theoretical. AI systems today make millions of content moderation decisions. These decisions are made through algorithms that are not transparent, not auditable, and not subject to verification. 
Whether those algorithms systematically disadvantage viewpoints cannot be determined without independent testing.</p><h4><strong>The Trustable Answer</strong></h4><p><strong>Requirement:</strong> AI systems making content moderation decisions must demonstrate through adversarial testing that filtering does not systematically disadvantage political viewpoints.</p><p><strong>Implementation:</strong></p><ul><li><p><strong>Algorithmic bias verification:</strong> Systems must undergo testing where adversaries submit ideologically diverse content and demonstrate that moderation decisions are not systematically biased</p></li><li><p><strong>Transparency requirement:</strong> Content moderation decisions must be auditable by independent third parties with access to sufficient data to detect systemic patterns</p></li><li><p><strong>Continuous monitoring:</strong> Systems must prove through ongoing testing that bias does not emerge as models retrain on user feedback and content trends</p></li><li><p><strong>Independent verification:</strong> Insurance-backed audits verifying that free speech protections work under operational conditions across the political spectrum</p></li><li><p><strong>Explainability requirement:</strong> When content is filtered, systems must provide specific, auditable justification tied to terms of service violations rather than opaque algorithmic determinations</p></li></ul><p><strong>Why this works:</strong> Government censorship is visible and can be challenged through legal process. Algorithmic censorship is opaque and operates at scale before patterns become visible. Verification infrastructure makes algorithmic decisions auditable, enabling detection of systematic bias before it becomes embedded in public discourse infrastructure.</p><p><strong>First Amendment enforcement:</strong> When AI systems filter political speech, plaintiffs must prove systematic bias through discovery of internal algorithms and decision data. Companies resist disclosure as proprietary. 
Litigation takes years. Registry infrastructure makes bias testing a deployment requirement rather than a discovery battle. Systems prove evenhandedness before deployment rather than defend against bias claims after deployment.</p><h3><strong>V. Enabling Innovation and Ensuring American AI Dominance</strong></h3><h4><strong>What the Framework Proposes</strong></h4><p>Regulatory sandboxes, accessible federal datasets in AI-ready formats, <strong>no new federal rulemaking body to regulate AI</strong>, support through existing regulatory bodies and <strong>industry-led standards</strong>.</p><h4><strong>Why This Fails</strong></h4><p>This section does not merely fail to provide verification infrastructure. It explicitly prohibits it.</p><p>&#8220;Congress should not create any new federal rulemaking body to regulate AI&#8221; is architectural abdication. Existing regulatory bodies (FDA, SEC, FAA, FTC) lack technical capacity to verify AI safety in their domains. They depend on industry self-certification. The framework instructs them to continue depending on industry self-certification while prohibiting creation of independent verification infrastructure.</p><p>&#8220;Industry-led standards&#8221; means vendors define what counts as sufficient safety verification. This is predictable regulatory capture. Organizations do not fund standards development that prevents their systems from deploying. Standards bodies that consistently produce disqualifying findings do not receive industry support. Market selection optimizes standards for permissiveness.</p><p>The framework treats this as innovation enablement. It is 100% verification abandonment.</p><h4><strong>The Trustable Answer</strong></h4><p><strong>Requirement:</strong> Industry-led standards must include independently verifiable proof requirements, not just process guidelines. 
Existing regulatory bodies must be supported with verification infrastructure they currently lack.</p><p><strong>Implementation:</strong></p><ul><li><p><strong>Proof requirement layer:</strong> Industry standards (ISO, NIST, sector-specific frameworks) must specify what constitutes sufficient evidence that systems meet safety requirements, not just what processes should be followed</p></li><li><p><strong>Independent verification infrastructure:</strong> Create insurance-backed verification bodies that test systems against industry standards through adversarial methodology</p></li><li><p><strong>Existing regulator support:</strong> Provide SEC, FDA, FAA, FTC with registry access showing which AI systems have verified proofs for their domains&#8212;regulators monitor compliance rather than conducting technical verification themselves</p></li><li><p><strong>Sandbox verification:</strong> Even experimental deployments must demonstrate safety through adversarial testing before human value is put at risk</p></li><li><p><strong>Continuous verification requirement:</strong> Systems that pass initial verification must maintain proof renewal as they evolve&#8212;verification is not one-time certification</p></li></ul><p><strong>Why this works:</strong> The framework correctly identifies that creating new federal AI regulators duplicates expertise that exists in sector-specific agencies. The error is assuming those agencies have verification capacity they do not possess. Registry infrastructure provides the verification layer that existing regulators can leverage without requiring them to develop AI-specific technical expertise.</p><p><strong>The critical distinction:</strong> The framework prohibits &#8220;new federal rulemaking body <strong>to regulate</strong> AI.&#8221; Registry infrastructure does not regulate AI development. It verifies AI safety. These are different functions. Regulation sets rules about what AI can do. 
Verification determines whether deployed AI systems actually behave safely. Existing regulators set rules. Registry infrastructure provides verification.</p><p><strong>Market inevitability:</strong> Insurance companies will create this infrastructure regardless of government action. When the first major AI-caused catastrophe occurs in a regulated domain&#8212;financial fraud at scale, medical AI misdiagnosis causing deaths, autonomous system failure causing mass casualties&#8212;liability will be enormous. Insurers will refuse coverage for unverified systems. Enterprises will demand verification infrastructure. The only question is whether that infrastructure is built proactively or reactively.</p><p>The framework optimizes for reactive infrastructure built after catastrophic failure. This is a choice, not an inevitability.</p><h3><strong>VI. Educating Americans and Developing an AI-Ready Workforce</strong></h3><h4><strong>What the Framework Proposes</strong></h4><p>Incorporate AI training into existing education programs, study workforce realignment, support land-grant institutions for AI youth development.</p><h4><strong>Why This Fails</strong></h4><p>The framework deploys AI into schools and workforce training programs without verification requirements. Educational AI will make decisions about student capabilities, career recommendations, and learning pathways. Workforce training AI will determine job readiness and skill development. These systems will be deployed based on vendor claims of effectiveness, not verified evidence of safety.</p><p>When educational AI systematically biases student outcomes&#8212;disadvantaging certain demographics, incorrectly assessing capabilities, directing students away from opportunities they could succeed in&#8212;the harm is not immediately visible, and it is potentially <strong>generational</strong>. Students don&#8217;t know what opportunities they were denied. 
Bias in educational AI compounds over years as it shapes academic trajectories.</p><p>The framework contains no mechanism to detect this bias before it causes harm.</p><h4><strong>The Trustable Answer</strong></h4><p><strong>Requirement:</strong> AI systems deployed in educational settings must demonstrate through adversarial testing that they do not systematically bias student outcomes.</p><p><strong>Implementation:</strong></p><ul><li><p><strong>Educational AI verification:</strong> Systems deployed in schools must undergo testing where adversaries attempt to demonstrate systematic bias in assessments, recommendations, and opportunity allocation</p></li><li><p><strong>Student data protection verification:</strong> Educational AI must prove through independent audit that student data is protected, not used for model training without consent, and not shared with third parties</p></li><li><p><strong>Continuous monitoring:</strong> Educational AI must be re-verified every 180 days as systems evolve and student populations change</p></li><li><p><strong>Workforce training verification:</strong> AI tools used in job training must demonstrate through testing that they don&#8217;t systematically disadvantage vulnerable populations</p></li><li><p><strong>Explainability requirement:</strong> Educational and workforce AI must provide specific, auditable explanations for decisions affecting student/worker opportunities</p></li></ul><p><strong>Why this works:</strong> Educational AI operates on vulnerable populations (students, workers in transition) who lack power to challenge systemic bias. Verification before deployment shifts the burden from students proving they were harmed to systems proving they don&#8217;t cause harm.</p><p><strong>Enforcement through institutional liability:</strong> Schools and training programs deploying unverified AI face liability when bias is eventually discovered. Insurance for educational AI deployment will require verification. 
Institutions will demand registry infrastructure that makes verification possible. The alternative is accepting liability for unknown bias in systems they cannot audit.</p><h3><strong>VII. Establishing a Federal Policy Framework, Preempting Cumbersome State AI Laws</strong></h3><h4><strong>What the Framework Proposes</strong></h4><p>Preempt state AI laws imposing &#8220;undue burdens,&#8221; create &#8220;minimally burdensome national standard,&#8221; prevent states from regulating AI development (&#8220;inherently interstate phenomenon&#8221;), prevent states from burdening lawful AI use, prevent states from penalizing AI developers for third-party unlawful conduct.</p><h4><strong>Why This Fails</strong></h4><p>This section creates a verification vacuum by design.</p><p>The framework prohibits states from regulating AI development while refusing to create federal verification infrastructure. The result is that <strong>no jurisdiction</strong> can require safety verification:</p><ul><li><p>States: Cannot regulate AI development (preempted as &#8220;interstate phenomenon&#8221;)</p></li><li><p>Federal: Will not create verification infrastructure (&#8220;no new federal rulemaking body&#8221;)</p></li><li><p>Industry: Self-certifies through &#8220;industry-led standards&#8221;</p></li></ul><p>This is not governance. This is architectural capture&#8212;the framework prevents safety infrastructure from being built at any jurisdictional level.</p><p>The framework treats state AI regulation as a &#8220;cumbersome burden&#8221; rather than a legitimate exercise of police powers to protect citizens. It preempts state action while providing no federal substitute. The &#8220;minimally burdensome national standard&#8221; turns out to be no verification requirement at all.</p><h4><strong>The Trustable Answer</strong></h4><p><strong>Requirement:</strong> Federal verification standards must be <strong>stronger</strong> than state alternatives, not weaker. 
Preemption should prevent fragmentation, not prevent verification.</p><p><strong>Implementation:</strong></p><ul><li><p><strong>Federal verification floor:</strong> Establish registry infrastructure as national verification standard that preempts weaker state requirements while allowing states to mandate registry verification in their jurisdictions</p></li><li><p><strong>Interstate coordination:</strong> Registry provides consistent verification across jurisdictions without preventing state enforcement&#8212;systems verified in one state are verified nationally</p></li><li><p><strong>Traditional police powers preservation:</strong> States retain authority to require registry verification as exercise of consumer protection, fraud prevention, and child safety powers (which framework acknowledges states retain)</p></li><li><p><strong>Critical domain mandates:</strong> Federal law requires registry verification for AI deployed in domains with high public risk (healthcare, finance, education, employment, public safety)</p></li><li><p><strong>Procurement specifications:</strong> Federal and state governments require registry verification for AI systems they procure or deploy</p></li></ul><p><strong>Why this works:</strong> The framework correctly identifies that 50 different state AI regulations create compliance burden. The error is assuming the solution is no verification requirements rather than consistent national verification infrastructure. Registry provides the consistent standard the framework claims to want.</p><p><strong>Constitutional structure:</strong> The framework&#8217;s preemption argument is weak. States have traditionally regulated dangerous technologies under police powers (consumer protection, fraud prevention, child safety). The framework acknowledges states retain these powers. Registry verification falls squarely within traditional state authority to protect citizens from harm. 
Federal preemption that prohibits states from requiring safety verification of dangerous technologies is constitutionally dubious.</p><p>More importantly: <strong>it doesn&#8217;t matter</strong>. Insurance markets will create verification requirements regardless of whether state or federal law mandates them. The question is whether government provides infrastructure that makes verification consistent and efficient, or whether verification emerges chaotically through litigation and catastrophic failure.</p><h2>III. The Architecture That Actually Works</h2><p>The White House framework fails because it was meant to. It treats AI safety as a matter of documentation and compliance. The Trustable answer is infrastructure that makes safety verifiable, continuous, and enforceable.</p><h3><strong>The Six-Layer Verification Architecture</strong></h3><p><strong>Layer 1: AI Systems Under Verification</strong> Models, data pipelines, deployment infrastructure, operational context. The systems that will make decisions affecting human value.</p><p><strong>Layer 2: Adversarial Proof Production</strong> Systems generate evidence through hostile testing designed to discover failure modes:</p><ul><li><p>Data provenance under interrogation attempting to find unlicensed training data</p></li><li><p>Model integrity under distribution shift testing</p></li><li><p>System reliability under adversarial attack</p></li><li><p>Transparency sufficient for independent safety determination</p></li><li><p>Governance accountability with enforceable mechanisms</p></li></ul><p>This is not documentation of good processes. This is evidence harvested through adversarial testing explicitly designed to produce disqualifying findings if the system is unsafe.</p><p><strong>Layer 3: Independent Verification</strong> Third-party verification by economically independent entities&#8212;primarily insurance-backed verification bodies whose survival depends on accurate risk assessment, not client retention. 
These entities conduct systematic adversarial testing to validate proofs.</p><p><strong>Layer 4: Registry Recording</strong> Verified proofs recorded in independent, publicly interrogable registries. Proofs are cryptographically signed, timestamped, continuously renewable. Registry status is machine-readable, enabling automated procurement, underwriting, and compliance checking.</p><p><strong>Layer 5: Public Interrogation</strong> Regulators, insurers, enterprises, investors, and civil society can interrogate safety claims directly through registry access. This makes trust machine-readable. An enterprise considering AI procurement can verify registry status. An insurer underwriting AI deployment can query proof decay status. A regulator can monitor compliance in real time.</p><p><strong>Layer 6: Revocation and Renewal</strong> When proofs decay, systems change materially, or verification fails, registry status updates automatically. Dependent systems receive alerts. Insurance coverage may be invalidated. Procurement authorizations may be withdrawn. 
Regulatory compliance may lapse.</p><p>This creates enforceable accountability through economic mechanisms rather than government enforcement.</p><h3><strong>Why This Architecture Succeeds Where Government Fails</strong></h3><p><strong>Speed:</strong> Registry verification operates at the speed of AI evolution (days/weeks), not regulatory timescales (years)</p><p><strong>Scale:</strong> Distributed verification through insurance-backed entities scales better than centralized government agencies</p><p><strong>Expertise:</strong> Verification bodies develop AI-specific technical expertise that general regulatory agencies lack</p><p><strong>Independence:</strong> Insurance-backed verification avoids capture because underwriters lose money when they approve unsafe systems that cause claims</p><p><strong>Enforcement:</strong> Economic enforcement through insurance, procurement, and capital markets is faster and more certain than regulatory enforcement through litigation</p><p><strong>Adaptability:</strong> Registry infrastructure evolves as AI capabilities change; regulatory frameworks ossify</p><p><strong>International compatibility:</strong> Registry verification can operate across jurisdictions; regulatory frameworks fragment</p><h3><strong>The Five Core Requirements Implementation</strong></h3><p><strong>1. Adversarial Verification, Not Process Compliance</strong></p><ul><li><p>Testing designed to discover failure modes that would disqualify deployment</p></li><li><p>Red-team exercises attempting to cause harm through adversarial inputs</p></li><li><p>Distribution shift testing verifying robustness when operational conditions change</p></li><li><p>Supply chain interrogation detecting inherited risks from upstream dependencies</p></li></ul><p><strong>2. 
Continuous Verification, Not Point-in-Time Certification</strong></p><ul><li><p>Proofs decay on defined timescales (3-6 months for model reliability, 30-90 days for adversarial testing, 6-24 months for data provenance)</p></li><li><p>Systems demonstrate continuous monitoring with automated alerts when verification lapses</p></li><li><p>Registry status updates when proofs expire or systems change outside tested parameters</p></li><li><p>Verification operates at the speed of system evolution</p></li></ul><p><strong>3. Economic Independence, Not Client-Service Relationships</strong></p><ul><li><p>Verification conducted by insurance-backed entities whose economic survival depends on accurate risk assessment</p></li><li><p>Fee structures funded through industry pools or insurance mechanisms that don&#8217;t create per-client retention pressure</p></li><li><p>Verifiers can produce disqualifying findings without losing business because they&#8217;re not in client-service relationships</p></li></ul><p><strong>4. Stakeholder Value Safety, Not Organizational Process Maturity</strong></p><ul><li><p>Can individuals whose employment depends on AI trust that systems won&#8217;t systematically disadvantage them?</p></li><li><p>Can enterprises whose operations depend on AI trust that failures won&#8217;t cause catastrophic business disruption?</p></li><li><p>Can regulators whose enforcement depends on AI trust that systems will behave predictably?</p></li><li><p>Can insurers whose underwriting depends on AI trust that risks are measurable?</p></li></ul><p><strong>5. 
Revocation Authority, Not Aspirational Standards</strong></p><ul><li><p>Failed verification produces actionable consequences (deployment blocks, insurance invalidation, regulatory non-compliance)</p></li><li><p>Registry status changes automatically when proofs decay</p></li><li><p>Downstream systems receive alerts when upstream components lose verification</p></li><li><p>Revocation is enforceable without requiring litigation</p></li></ul><h2>IV. Why This Will Happen Regardless of Government Action</h2><p>The White House framework treats verification infrastructure as optional. It is not. Insurance markets will force its creation.</p><h3><strong>The Insurance Inevitability</strong></h3><p><strong>Current state:</strong> Underwriters cannot accurately price AI risk. Systems are opaque. Training data is unknown. Behavior is unpredictable. Liability chains are unclear. Without standardized evidence about system safety, underwriting becomes speculation.</p><p><strong>No insurance market survives on speculation.</strong></p><p><strong>What insurers will do:</strong></p><ol><li><p>Demand verification infrastructure that allows consistent risk assessment</p></li><li><p>Offer premium reductions for registry-verified systems and higher premiums for unverified systems</p></li><li><p>Create immediate market pressure through differential pricing</p></li><li><p>Eventually refuse coverage for unverified systems entirely</p></li></ol><p><strong>What this creates:</strong> Once reinsurers&#8212;who care deeply about systemic risk&#8212;begin requiring registry signals, the entire insurance market follows. When insurance becomes difficult to obtain without registry verification, unverified deployment becomes economically impossible.</p><p><strong>The timeline:</strong> This does not require government action. It requires catastrophic failure. 
The first major AI-caused disaster with enormous liability (medical AI causing deaths at scale, financial AI causing market collapse, autonomous systems causing mass casualties, educational AI creating systematic harm to vulnerable populations) will force insurance markets to demand verification.</p><p><strong>The question is not whether this happens. The question is whether verification infrastructure exists before the catastrophic failure or after.</strong></p><p>The White House framework optimizes for building infrastructure after catastrophic failure. This is a choice. An alternative exists.</p><h3><strong>The Enterprise Procurement Pressure</strong></h3><p><strong>Current state:</strong> Enterprises deploying AI into critical operations face unmeasurable risk. They depend on vendor claims of safety without independent verification.</p><p><strong>What enterprises will do:</strong></p><ol><li><p>Begin requiring registry verification in procurement specifications</p></li><li><p>Refuse to accept liability for unverified AI in critical operations</p></li><li><p>Demand contractual protections backed by verified safety proofs</p></li><li><p>Create market pressure through procurement requirements</p></li></ol><p><strong>What this creates:</strong> AI vendors who cannot provide registry verification lose access to enterprise customers deploying in critical domains. 
Market pressure forces verification even without regulatory mandates.</p><p><strong>Early adopters:</strong> Healthcare systems (liability exposure enormous), financial institutions (regulatory pressure intense), critical infrastructure operators (public safety implications), large-scale employers (discrimination liability significant).</p><h3><strong>The Regulatory Failure Exposure</strong></h3><p><strong>What the White House framework creates:</strong> A situation where verification infrastructure is available, deployed, and economically viable, but government has not mandated its use in critical domains.</p><p><strong>What this exposes:</strong> Regulatory failure becomes visible and actionable. Citizens, enterprises, insurers, and investors can point to functioning verification infrastructure and demand that government mandate its use in critical domains.</p><p>&#8220;We have the measurement tools. Use them or explain why you won&#8217;t.&#8221;</p><p><strong>The political pressure this creates:</strong> When catastrophic AI failure occurs and registry infrastructure exists that could have prevented it, government failure is not abstract. It is specific. Regulators cannot claim they lacked capacity&#8212;the capacity existed and they chose not to use it.</p><p><strong>This creates the political pressure that was previously absent.</strong> Registry infrastructure makes government failure visible, measurable, and actionable through democratic mechanisms.</p><h3><strong>The International Competitiveness Argument</strong></h3><p><strong>What happens when other jurisdictions require verification:</strong> EU AI Act creates verification requirements. China implements AI safety infrastructure. 
Other jurisdictions mandate proof systems.</p><p><strong>American AI companies face a choice:</strong></p><ul><li><p>Meet international verification standards to access global markets</p></li><li><p>Operate only in the US market with no verification requirements</p></li></ul><p><strong>Market reality:</strong> Companies choose global markets. They implement verification infrastructure to meet international requirements. The US market gets verified AI not because the US government required it but because international markets did.</p><p><strong>The competitiveness argument inverts:</strong> The framework claims verification requirements hurt competitiveness. Reality: lack of verification infrastructure hurts competitiveness by forcing American companies to meet inconsistent international requirements without consistent domestic verification infrastructure to leverage.</p><h2>V. The Remaining Role of Government</h2><p>The White House framework treats government&#8217;s role as removing barriers and preventing regulation. This is not policy. This is capitulation. The framework systematically dismantles every mechanism through which AI safety could be verified, prohibits federal infrastructure, preempts state action, mandates industry self-certification, and calls the resulting void &#8220;American AI leadership.&#8221; When the catastrophic failures come, this framework will be cited as proof that government tried everything except the one thing that works: requiring systems to prove they are safe before they are deployed.</p><h3><strong>Government as Institutional Backstop</strong></h3><p>Registry infrastructure shifts regulatory function from direct oversight to institutional backstop. Government does not verify AI safety directly; exposed actors do that through insurance-backed verification. Government ensures the verification infrastructure itself remains independent and enforceable.</p><p><strong>This means:</strong></p><p><strong>1. 
Mandate registry verification for systems deployed in critical domains</strong></p><ul><li><p>Employment decisions affecting individual livelihoods</p></li><li><p>Credit allocation affecting financial access</p></li><li><p>Healthcare systems affecting patient safety</p></li><li><p>Public safety applications affecting community security</p></li><li><p>Educational systems affecting student opportunities</p></li><li><p>Legal proceedings affecting individual rights</p></li></ul><p><strong>2. Prevent regulatory arbitrage by requiring proof standards across jurisdictions</strong></p><ul><li><p>Systems verified in one jurisdiction are verified nationally</p></li><li><p>No jurisdiction-shopping for weaker verification requirements</p></li><li><p>Interstate coordination through consistent registry standards</p></li></ul><p><strong>3. Prohibit deployment of unverified systems in high-risk applications</strong></p><ul><li><p>Not as regulatory burden but as enforcement of verification requirement</p></li><li><p>Systems can operate in low-risk contexts without verification</p></li><li><p>High-risk deployment requires proof</p></li></ul><p><strong>4. Enforce revocation authority when systems lose standing</strong></p><ul><li><p>Registry status changes trigger regulatory action</p></li><li><p>Systems operating with expired proofs face compliance consequences</p></li><li><p>Dependent systems must respond to revocation alerts</p></li></ul><p><strong>5. 
Support creation of independent verification infrastructure</strong></p><ul><li><p>Through industry pools that fund verification without creating per-client dependencies</p></li><li><p>Through insurance mechanisms that align verification with risk assessment</p></li><li><p>Not through creation of new regulatory agencies but through support for private verification infrastructure</p></li></ul><p><strong>Government becomes the guarantor that market-based verification remains rigorous rather than the primary verifier.</strong></p><p>When insurance companies demand adversarial testing, when enterprises require continuous verification, when investors price based on proof decay&#8212;government ensures those mechanisms cannot be circumvented through regulatory shopping or voluntary compliance.</p><h3><strong>What Government Must Not Do</strong></h3><p><strong>Do not create new AI-specific regulatory agencies.</strong> The framework is correct that sector-specific expertise exists in FDA, SEC, FAA, FTC. The error is assuming those agencies have verification capacity. Provide them with registry infrastructure they can leverage.</p><p><strong>Do not attempt to conduct technical AI verification through government agencies.</strong> Government lacks capacity and cannot operate at the speed AI evolution requires. Enable private verification infrastructure through insurance-backed mechanisms.</p><p><strong>Do not preempt state action that requires registry verification.</strong> States exercising traditional police powers to protect citizens should be able to mandate verification. Federal preemption should prevent weaker state requirements, not prevent verification requirements entirely.</p><p><strong>Do not optimize for &#8220;minimally burdensome&#8221; at the expense of verification.</strong> Unverified AI deployment creates burden through catastrophic failure. Verification creates burden through testing requirements. The former burden falls on victims. 
The latter burden falls on deployers. This is not symmetrical.</p><h2>VI. Conclusion: Infrastructure or Catastrophe</h2><p>The White House National Policy Framework for Artificial Intelligence treats AI safety as a problem that will be solved through innovation, industry self-certification, and existing legal frameworks. It will not be.</p><p>The framework explicitly prohibits creation of verification infrastructure while preempting states from building it themselves. The result is a governance vacuum where no jurisdiction can require safety verification before deployment.</p><p>This is not sustainable. What will happen instead:</p><p><strong>Scenario 1: Proactive Infrastructure (Unlikely Given Current Framework)</strong></p><ul><li><p>Insurance markets recognize unpriced systemic risk</p></li><li><p>Enterprises demand verification in procurement</p></li><li><p>Private verification infrastructure emerges through market pressure</p></li><li><p>Government eventually mandates what markets already depend on</p></li><li><p>Catastrophic failures are prevented or limited</p></li></ul><p><strong>Scenario 2: Reactive Infrastructure (Likely Given Current Framework)</strong></p><ul><li><p>AI systems deploy without verification</p></li><li><p>Catastrophic failure occurs (medical, financial, safety, educational)</p></li><li><p>Liability is enormous</p></li><li><p>Insurance markets demand verification retroactively</p></li><li><p>Verification infrastructure is built after harm</p></li><li><p>Government mandates verification after public pressure</p></li><li><p>Subsequent failures are prevented, initial harm was unnecessary</p></li></ul><p><strong>Scenario 3: Regulatory Capture Completion (Possible If Framework Passes As Written)</strong></p><ul><li><p>Federal framework preempts state action</p></li><li><p>Federal government refuses to create verification infrastructure</p></li><li><p>Industry self-certifies through captured standards bodies</p></li><li><p>Multiple catastrophic 
failures occur</p></li><li><p>Legal liability is fragmented and slow</p></li><li><p>Insurance markets eventually force verification but timeline is measured in decades</p></li><li><p>Harm is widespread before correction</p></li></ul><p>The White House framework optimizes for Scenario 3 while claiming to enable innovation. This is not innovation enablement. This is verification abandonment followed by inevitable harm followed by reactive infrastructure development. THIS is gross negligence.</p><h3><strong>The Trustable Position</strong></h3><p>We will build registry infrastructure regardless of government action. Insurance markets will demand it. Enterprises will require it. International competitiveness will force it. The only question is timeline.</p><p>Government can accelerate this timeline by:</p><ul><li><p>Mandating registry verification in critical domains</p></li><li><p>Supporting insurance-backed verification infrastructure</p></li><li><p>Providing existing regulators with registry access</p></li><li><p>Preventing regulatory arbitrage across jurisdictions</p></li><li><p>Making regulatory failure visible when verification infrastructure exists but is unused</p></li></ul><p>Or government can delay this timeline by:</p><ul><li><p>Prohibiting creation of verification infrastructure</p></li><li><p>Preempting state requirements</p></li><li><p>Optimizing for &#8220;minimal burden&#8221; over safety verification</p></li><li><p>Treating industry self-certification as sufficient</p></li></ul><p>Either way, the infrastructure will exist. Either it exists before catastrophic failure or after.</p><p>The framework chooses after.</p><p>We are building for before.</p><h3><strong>No Proof, No Deployment</strong></h3><p>This is not a regulatory position. It is an engineering requirement. 
Systems that cannot produce verifiable evidence of safety under adversarial testing should not be deployed in contexts where they can destroy human value.</p><p>The White House framework treats this as &#8220;burden.&#8221; We treat it as a minimum viable safety standard.</p><p>The difference is not technical. It is philosophical. The framework assumes AI systems are safe until proven harmful. We assume AI systems are unsafe until proven otherwise through adversarial verification.</p><p>History will determine which assumption was correct through empirical demonstration. The costs of being wrong are not symmetrical.</p><p>When the first catastrophic AI failure occurs, the question will be: Did verification infrastructure exist that could have prevented this?</p><p>If the answer is &#8220;No, the White House framework prohibited its creation,&#8221; that is one kind of failure.</p><p>If the answer is &#8220;Yes, but government chose not to require it,&#8221; that is a different kind of failure.</p><p>If the answer is &#8220;Yes, and the system was deployed despite failing verification,&#8221; that is criminal liability rather than regulatory failure.</p><p><strong>The registry infrastructure makes all three answers visible and actionable.</strong></p><p>That is its purpose. Not to regulate AI development. Not to slow innovation. To make safety verifiable, continuous, and enforceable so that when failure occurs, responsibility is clear and correction is possible.</p><p>The White House framework prevents that clarity. 
It optimizes for opacity.</p><p>We are building for transparency.</p><p>The market will decide which approach serves American interests.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/p/when-government-abdicates-a-complete?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/p/when-government-abdicates-a-complete?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ys7I!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff945025d-7e04-4d2b-9167-83917b4c727f_1002x875.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ys7I!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff945025d-7e04-4d2b-9167-83917b4c727f_1002x875.png 424w, https://substackcdn.com/image/fetch/$s_!ys7I!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff945025d-7e04-4d2b-9167-83917b4c727f_1002x875.png 848w, 
https://substackcdn.com/image/fetch/$s_!ys7I!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff945025d-7e04-4d2b-9167-83917b4c727f_1002x875.png 1272w, https://substackcdn.com/image/fetch/$s_!ys7I!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff945025d-7e04-4d2b-9167-83917b4c727f_1002x875.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ys7I!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff945025d-7e04-4d2b-9167-83917b4c727f_1002x875.png" width="1002" height="875" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f945025d-7e04-4d2b-9167-83917b4c727f_1002x875.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:875,&quot;width&quot;:1002,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:112010,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.trustable.blog/i/191633224?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff945025d-7e04-4d2b-9167-83917b4c727f_1002x875.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ys7I!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff945025d-7e04-4d2b-9167-83917b4c727f_1002x875.png 424w, 
https://substackcdn.com/image/fetch/$s_!ys7I!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff945025d-7e04-4d2b-9167-83917b4c727f_1002x875.png 848w, https://substackcdn.com/image/fetch/$s_!ys7I!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff945025d-7e04-4d2b-9167-83917b4c727f_1002x875.png 1272w, https://substackcdn.com/image/fetch/$s_!ys7I!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff945025d-7e04-4d2b-9167-83917b4c727f_1002x875.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line 
x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p>]]></content:encoded></item><item><title><![CDATA[The CISO Myth: The Anti-Trust Patterns Inside Hospitals]]></title><description><![CDATA[How compliance-first security erodes trust, care, and capacity.]]></description><link>https://www.trustable.blog/p/the-ciso-myth-the-anti-trust-patterns</link><guid isPermaLink="false">https://www.trustable.blog/p/the-ciso-myth-the-anti-trust-patterns</guid><dc:creator><![CDATA[Rachel Maron]]></dc:creator><pubDate>Fri, 30 Jan 2026 12:03:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/xLZXcddJ51U" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/p/the-ciso-myth-the-anti-trust-patterns?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/p/the-ciso-myth-the-anti-trust-patterns?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p><h1>The Anti-Trust Patterns Inside Hospitals</h1><p>Coercion, extraction, and impunity in clinical security design</p><p>Hospitals do not usually fail because people stop caring.</p><p>They fail because systems are built that quietly make caring unsustainable.</p><p>By the time outcomes collapse, the damage has already been normalized into workflows, dashboards, and executive narratives about 
&#8220;efficiency,&#8221; &#8220;compliance,&#8221; and &#8220;necessary tradeoffs.&#8221; Trust does not disappear all at once. It is extracted, coerced, and exhausted over time, until what remains is a brittle shell that still looks operational from the outside.</p><p>Healthcare security is not immune to this. In many organizations, it has become one of the most efficient trust-eroding machines in the building.</p><h2>The Shape of Anti-Trust</h2><p>Anti-trust is not simply the absence of trust. It is an active pattern.</p><p>It emerges when systems are designed in ways that force people to choose between doing their job and obeying the system. It grows when friction is imposed downward and c&#8230;</p>
      <p>
          <a href="https://www.trustable.blog/p/the-ciso-myth-the-anti-trust-patterns">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[The Quiet Breach at CISA]]></title><description><![CDATA[Authority, AI, and the collapse of restraint at the nation&#8217;s cyber defense agency.]]></description><link>https://www.trustable.blog/p/the-quiet-breach-at-cisa</link><guid isPermaLink="false">https://www.trustable.blog/p/the-quiet-breach-at-cisa</guid><dc:creator><![CDATA[Rachel Maron]]></dc:creator><pubDate>Thu, 29 Jan 2026 23:01:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!7kni!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f5ce35a-7bf4-445a-89e6-ef0877bca1c1_630x420.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/p/the-quiet-breach-at-cisa?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/p/the-quiet-breach-at-cisa?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><blockquote><p>I am interrupting our scheduled series about the <a href="https://www.trustable.blog/publish/post/186248307">Healthcare CISO</a> to bring you a shining example of Trust collapse in action. </p></blockquote><h1>The Cybersecurity Chief and the Upload Button</h1><p>When trust collapses, it rarely does so with a bang. 
It does so with a mouse click and a file selection, apparently.</p><p>There are scandals that feel cinematic. And then there are scandals that feel structural. This one is the latter.</p><p>According to reporting by <a href="https://arstechnica.com/tech-policy/2026/01/us-cyber-defense-chief-accidentally-uploaded-secret-government-info-to-chatgpt/">Ars Technica</a>, the acting head of the Cybersecurity and Infrastructure Security Agency uploaded sensitive government material into a public instance of ChatGPT last summer. The material was marked &#8220;for official use only,&#8221; which is bureaucracy-speak for information that is not classified but is explicitly restricted from public release. At least four documents containing contracting and cybersecurity information triggered multiple automated security alerts in the first week of August alone.</p><p>This is not a story about one man making a mistake. It is a story about institutional incoherence, authority without literacy, and a government that keeps confusing deployment with understanding.</p><h2>What &#8220;Upload&#8221; Actually Means</h2><p>Let&#8217;s be precise about what happened here. When you paste text into the public version of ChatGPT, you are not sending it to a secure vault. You are feeding it into a training surface used by hundreds of millions of users worldwide. The data becomes part of OpenAI&#8217;s ecosystem. The company is transparent about this: information you provide may be used to improve the model&#8217;s responses for other users.</p><p>DHS had already built DHSChat, an internal AI chatbot that operates within a secure, closed environment specifically designed to prevent user inputs from leaving federal networks. Data from DHSChat is not used to train external models. 
The tool was developed after extensive privacy impact assessments, with guardrails established through collaboration with cloud, cybersecurity, privacy, and civil rights experts across the department.</p><p>DHSChat was available to roughly 19,000 DHS headquarters employees at the time of the incident. It was designed for exactly the kind of work Madhu Gottumukkala, the acting CISA director, was attempting to do: summarizing contracting documents, processing internal material, generating analysis without exposing sensitive information to external systems.</p><p>Gottumukkala requested and received special permission to use ChatGPT shortly after arriving at CISA in May 2025. By May 2025, DHS had restricted access to commercial generative AI systems, directing employees to use internal tools. Most DHS employees could not access public AI platforms. For good reason.</p><p>But hierarchy substituted itself for judgment. Authority became its own justification.</p><h2>The Permission Slip Problem</h2><p>One anonymous official characterized the sequence bluntly: &#8220;He forced CISA&#8217;s hand into making them give him ChatGPT, and then he abused it.&#8221;</p><p>This is the first structural failure. Why was special permission granted at all?</p><p>This was not a junior analyst cutting corners because internal tools were slow or cumbersome. This was the top cyber defense official in the country insisting on access to a tool his own agency had deemed unsafe for general use. That is not innovation. That is hierarchy performing exemption from the rules it enforces on others.</p><p>Authority is not competence. Access is not literacy.</p><p>Following the incident, Gottumukkala held meetings with senior DHS and CISA officials, including legal and information security chiefs, to review the uploads and discuss the handling of sensitive material. 
This is what damage control looks like when the person who needs controlling is the one in charge of the controls.</p><p>The permission structure here reveals something corrosive. When leaders can exempt themselves from the constraints designed to protect the systems they oversee, those constraints become theater. They apply to subordinates. They dissolve for superiors. This is not governance. This is performance.</p><h2>&#8220;Modernization&#8221; as a Get-Out-of-Jail-Free Card</h2><p>When questioned about the incident, DHS spokespeople pointed to executive orders encouraging AI adoption across government. This framing treats policy as permission to bypass safety architecture.</p><p>This is the most dangerous sentence in modern governance: &#8220;We were told to deploy.&#8221;</p><p>Deployment without governance is how systems rot from the inside. AI is not a software update. It is an epistemic instrument. It absorbs what you give it, reflects it back in altered form, and redistributes risk in ways that are hard to trace and impossible to recall.</p><p>Uploading sensitive material into a public model is not a policy error. It is a category error. Treating AI like a search engine instead of a memory surface. Treating convenience like capability. Treating speed like strategy.</p><p>Once the data is in, you don&#8217;t get it back. Any information uploaded into a public version of ChatGPT is shared with OpenAI and may be used to help improve responses for other users. The material does not stay contained. It becomes part of the diffusion network.</p><p>The alternative was right there. DHSChat existed precisely to allow AI experimentation without this exposure. The tool was built to enable employees to leverage generative AI capabilities safely and securely using non-public data. 
It was designed for this exact use case.</p><p>Gottumukkala chose the public tool anyway.</p><h2>The Anti-Trust Pattern</h2><p>Zoom out, and a pattern emerges.</p><p>In May 2025, Gottumukkala told personnel at CISA that much of its leadership was resigning. Mass departures gut institutional memory. They signal that something inside the system has become untenable.</p><p>In June, Gottumukkala requested access to a controlled access program, an act requiring a polygraph examination. He <a href="https://www.politico.com/news/2025/12/21/cisa-acting-director-madhu-gottumukkala-polygraph-investigation-00701996">failed the polygraph</a> in the final weeks of July. Several CISA employees were subsequently placed on leave after the failed polygraph. DHS began investigating the circumstances surrounding the polygraph test and suspended six career staffers, telling them the polygraph did not need to be administered.</p><p>This is not what a high-trust system looks like. This is what happens when impunity outruns accountability.</p><p>Gottumukkala also attempted to remove CISA&#8217;s Chief Information Officer, Robert Costello, a move that other political appointees reportedly blocked. When leadership tries to remove oversight figures and faces internal resistance from within its own political layer, the dysfunction has metastasized.</p><p>Staffers reportedly called the tenure a &#8220;nightmare.&#8221; That word matters. Nightmares are not random. They are the psyche trying to warn you that something is wrong with the environment.</p><p>When leaders can make errors without consequence while subordinates absorb the blast radius, trust does not erode. It collapses. Quietly. Systemically.</p><h2>Congressional Testimony and the Performance of Confidence</h2><p>During congressional testimony in late January 2026, Gottumukkala rejected characterizations of the polygraph incident, stating he did &#8220;not accept the premise of that characterization&#8221;. 
This is the language of deflection masquerading as precision.</p><p>Congressional oversight exists to surface patterns. When leadership cannot articulate baseline threat forecasts, cannot maintain staff stability, cannot model the restraint its mission requires, the oversight function becomes diagnostic. It reveals the distance between institutional mandate and operational reality.</p><p>CISA exists to protect national trust surfaces: elections, infrastructure, coordination mechanisms. When its own leadership treats those surfaces casually, the danger is not just a single data leak. The danger is precedent.</p><p>If the cyber defense chief cannot model restraint around information handling, who exactly is supposed to?</p><h2>The Real Risk Isn&#8217;t ChatGPT</h2><p>To be clear, and frankly, I feel weird defending OpenAI, but: this is not about OpenAI behaving badly. OpenAI did not force anyone to upload government material. The platform operates according to its stated terms. Users agree to those terms when they use the service.</p><p>The real risk is governance theater. Leaders performing &#8220;modernization&#8221; while bypassing the very controls their agencies were built to enforce.</p><p>Cybersecurity is not about tools. It is about judgment under constraint. AI mirrors and amplifies whatever culture you put around it. In a coherent system, it has the potential to augment care. In a brittle one, it accelerates failure, it accelerates rupture.</p><p>The failure here is structural. Prior to his appointment at CISA, Gottumukkala served as the chief information officer of South Dakota under then-governor Kristi Noem, who became DHS Secretary under the Trump administration. This is a personnel pipeline, not a competency filter. Loyalty gets routed through institutional architecture as if loyalty were the same thing as capability.</p><p>It is not.</p><h2>What Collapse Looks Like Now</h2><p>No flames. No sirens. 
Just a quiet upload, multiple automated security alerts, an internal review, and a press statement about &#8220;Our commitment to innovation.&#8221;</p><p>A CISA spokesperson told Politico that Gottumukkala&#8217;s use of ChatGPT was &#8220;short-term and limited,&#8221; noting that he last used the tool in mid-July 2025 under an authorized temporary exception. This framing treats duration as exoneration. As if the problem was how long the exposure window stayed open rather than that it was opened at all.</p><p>Trust does not fail because of hackers alone. It fails when those in charge confuse speed with progress, permission with safety, and authority with wisdom.</p><p>The nightmare here is not that sensitive data might surface somewhere in an AI model&#8217;s training corpus. The nightmare is that the people responsible for preventing exactly that do not seem to understand why it matters.</p><p>DHS developed an entire internal AI infrastructure specifically to allow experimentation without this exposure. Privacy reviews. Security guardrails. Training protocols. A tool designed for the exact workflow Gottumukkala needed. He bypassed all of it.</p><p>And when automated systems caught the breach, when alarms fired exactly as designed, the response was not accountability. It was meetings. Reviews. Deflection. The machinery of looking serious without imposing consequences.</p><p>This is not a cybersecurity problem.</p><p>This is a governance failure.</p><p>And it is not an accident. It is a system working exactly as designed: to protect leadership from the constraints that bind everyone else. To perform competence while concentrating impunity. To demand trust while demolishing the conditions that make trust possible.</p><p>The collapse is quiet. The precedent is loud. 
And the people who should be listening are the ones who stopped paying attention the moment they received permission to act without restraint.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/p/the-quiet-breach-at-cisa?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/p/the-quiet-breach-at-cisa?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7kni!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f5ce35a-7bf4-445a-89e6-ef0877bca1c1_630x420.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7kni!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f5ce35a-7bf4-445a-89e6-ef0877bca1c1_630x420.png 424w, https://substackcdn.com/image/fetch/$s_!7kni!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f5ce35a-7bf4-445a-89e6-ef0877bca1c1_630x420.png 848w, 
https://substackcdn.com/image/fetch/$s_!7kni!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f5ce35a-7bf4-445a-89e6-ef0877bca1c1_630x420.png 1272w, https://substackcdn.com/image/fetch/$s_!7kni!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f5ce35a-7bf4-445a-89e6-ef0877bca1c1_630x420.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7kni!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f5ce35a-7bf4-445a-89e6-ef0877bca1c1_630x420.png" width="630" height="420" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9f5ce35a-7bf4-445a-89e6-ef0877bca1c1_630x420.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:420,&quot;width&quot;:630,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:37118,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.trustable.blog/i/186248307?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f5ce35a-7bf4-445a-89e6-ef0877bca1c1_630x420.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7kni!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f5ce35a-7bf4-445a-89e6-ef0877bca1c1_630x420.png 424w, https://substackcdn.com/image/fetch/$s_!7kni!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f5ce35a-7bf4-445a-89e6-ef0877bca1c1_630x420.png 848w, 
https://substackcdn.com/image/fetch/$s_!7kni!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f5ce35a-7bf4-445a-89e6-ef0877bca1c1_630x420.png 1272w, https://substackcdn.com/image/fetch/$s_!7kni!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f5ce35a-7bf4-445a-89e6-ef0877bca1c1_630x420.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Scott Olson/Getty Images</figcaption></figure></div><p></p>]]></content:encoded></item><item><title><![CDATA[The CISO 
Myth: Perimeter Guard in a Clinical World]]></title><description><![CDATA[When control replaces coherence, patients pay the price.]]></description><link>https://www.trustable.blog/p/the-ciso-myth-perimeter-guard-in</link><guid isPermaLink="false">https://www.trustable.blog/p/the-ciso-myth-perimeter-guard-in</guid><dc:creator><![CDATA[Rachel Maron]]></dc:creator><pubDate>Mon, 26 Jan 2026 12:28:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!zRte!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F311ac515-d79c-4abc-a529-0b99c76c9632_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/p/the-ciso-myth-perimeter-guard-in?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/p/the-ciso-myth-perimeter-guard-in?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p><h1>The CISO Myth: Perimeter Guard in a Clinical World</h1><p>Why &#8220;lock it down&#8221; thinking fails where care must flow</p><p>The modern CISO was not born in a hospital.</p><p>The role emerged in the mid-1990s, after Citibank responded to a $10 million cyber theft carried out by Vladimir Levin through its international funds transfer system. 
Steve Katz became the world&#8217;s first Chief Information Security Officer, hired with two directives: &#8220;Build the best cybersecurity department in the world&#8221; and &#8220;go out and spend time with our top international banking customers to limit the damage.&#8221;</p><p>The CISO role was forged in banks, payment networks, and financial services firms where the primary asset was transactional data, the primary threat was theft, and the primary strategy was containment. Build a perimeter. Harden it. Monitor ingress and egress. Assume that anything inside the walls is trusted and anything outside is hostile.</p><p>This model worked well enough when the worst-case failure mode was fraud.</p><p>It collapses completely when the worst-case failure mode is a patient dying because care was delayed.</p><p>Healthcare inherited the CISO role wholesale, without interrogating whether its metaphors, incentives, or success criteria made sense in an environment defined by urgency, human variability, and moral stakes. The result is a category error that keeps reproducing harm.</p><h2>The Financial DNA of the CISO Role</h2><p>Finance optimized security around three assumptions:</p><ul><li><p>Assets are static. Money and records sit still until moved.</p></li><li><p>Workflows are predictable. Transactions follow defined paths.</p></li><li><p>Delay is tolerable. Seconds matter for fraud detection, but minutes rarely kill anyone.</p></li></ul><p>Hospitals violate all three assumptions.</p><p>Patients move. Clinicians move. Devices move. Data moves continuously across wards, shifts, and contexts. Workflows are adaptive, improvisational, and deeply contingent on who is available, what is happening, and how sick someone is right now. Delay is not an inconvenience. It is a clinical variable.</p><p>The first CISO era, from 1995 to 2000, focused on passwords, log-on security, and perimeter defenses like firewalls and intrusion detection systems. 
Early CISO functions emphasized technical security controls and incident response. The role was narrow in scope and scale, born from the first instances of hacking in financial services.</p><p>Yet we still deploy security controls designed for static assets, predictable paths, and tolerable latency, then act surprised when clinicians route around them.</p><h2>The Perimeter Fantasy in a Porous Environment</h2><p>Hospitals do not have clean perimeters.</p><p>They have open doors, emergency intakes, visiting hours, rotating staff, contractors, students, medical devices that predate modern security models, and patients who show up unannounced in distress.</p><p>Perimeter metaphors assume a stable &#8220;inside&#8221; and a dangerous &#8220;outside.&#8221; Hospitals are all inside. Or more accurately, they are all interface.</p><p>Every login is an interface. Every alert is an interface. Every timeout is an interface. Every system outage is an interface.</p><p>Security that assumes it can simply fence off risk misunderstands where risk actually lives. In healthcare, risk lives in friction, confusion, and delay. It lives in the moment a nurse cannot log in quickly enough. It lives in the extra step that breaks a mental flow during triage. It lives in the authentication failure that forces paper notes that will later be re-entered incorrectly.</p><p>Perimeters do not protect care. They constrict it.</p><h2>How Clinicians Become the Enemy</h2><p>When security is imposed instead of designed, clinicians are positioned as threats.</p><p>Not intentionally. Structurally.</p><p>Controls that prioritize compliance over usability teach clinicians a quiet lesson: the system does not understand your work. Faced with that mismatch, clinicians do what humans always do in high-stakes environments. They adapt.</p><p>They share credentials. They reuse passwords. They leave sessions open. They write things down. 
They bypass alerts.</p><p>The evidence is overwhelming.</p><p>Multiple studies show that well over half of healthcare professionals admit to sharing credentials. 46% of employees share work-related passwords for accounts used by multiple coworkers. Password sharing is identified as one of the most common HIPAA violations. Yet healthcare staff continue to share credentials because every minute counts in critical care.</p><p>Research on workarounds to computer access in healthcare organizations documents that &#8220;workarounds are the norm, rather than the exception.&#8221; They not only go unpunished, they go unnoticed in most settings and are often taught as correct practice.</p><p>Clinicians offer their logged-in session to the next clinician as a &#8220;professional courtesy&#8221; even during security training sessions. Nurses circumvent the need to log out of computers on wheels by placing sweaters or large signs with their names on them. Staff defeat proximity sensors by putting styrofoam cups over detectors. The most junior person on staff is asked to keep pressing the space bar on everyone&#8217;s keyboard to prevent timeouts.</p><p>These behaviors are often framed as &#8220;noncompliance&#8221; or &#8220;human error.&#8221; That framing is backwards.</p><p>Workarounds are not rebellion. They are rescue.</p><p>They are clinicians trying to preserve patient care in systems that were never designed to support it under real conditions. A security program that treats these adaptations as adversarial behavior is misdiagnosing the problem.</p><p>When clinicians must choose between following security rules and treating a patient, they will choose the patient every time. Any security model that does not anticipate this is unsafe by construction.</p><h2>The Latency Tax</h2><p>&#8220;Lock it down&#8221; thinking imposes a latency tax.</p><p>Each control adds seconds. Each reauthentication adds cognitive load. Each poorly tuned alert steals attention. 
Individually, these costs look trivial. Collectively, they are measurable, compounding, and dangerous.</p><p>Studies and clinician reports show authentication overhead consuming tens of minutes per shift, and in some cases over an hour per day. One clinician mentioned that his dictation system has a 5-minute timeout that requires a password. During a 14-hour day, he spends almost 1.5 hours logging in.</p><p>Complex passwords and added authentication requirements exist to protect patient data. Ironically, they lead to decreased productivity and increased security risk. Managing and resetting complex passwords disrupts clinical workflows and consumes valuable time that would otherwise be spent providing care. This leads to burnout. 21% of nurses cite administrative tasks such as documentation, charting, and electronic health records as a top cause of burnout.</p><p>In time-sensitive environments, latency is not evenly distributed. It hits hardest during peak stress: shift changes, emergencies, understaffed nights, system degradation. That is exactly when security controls are least forgiving and clinicians are least able to absorb friction.</p><p>This is how well-intentioned security creates systemic brittleness. The system appears controlled under ideal conditions and fails catastrophically under pressure.</p><p>A design that only works when everything is going well is not a security design. It is a demo.</p><h2>Control Versus Coherence</h2><p>The core failure is philosophical.</p><p>Finance-oriented security optimizes for control. 
Healthcare requires coherence.</p><p>Control asks: Can we constrain behavior?<br>Coherence asks: Can the system hold together under stress?</p><p>Control treats humans as liabilities to be managed.<br>Coherence treats humans as adaptive components to be supported.</p><p>Control assumes obedience produces safety.<br>Coherence recognizes that understanding produces safety.</p><p>In healthcare, safety emerges from alignment between people, tools, and context. Security that disrupts that alignment undermines the very thing it claims to protect.</p><p>This is why zero-trust absolutism, imported without translation, so often backfires in hospitals. Traditional security follows a &#8220;castle-and-moat&#8221; approach, trusting everything inside the network. Zero trust treats every access request as potentially hostile, requiring verification regardless of location or network status.</p><p>The concept makes sense in theory. In practice, healthcare organizations face unique challenges. Health IT leaders realize their cybersecurity strategies should not tax already time-strapped clinicians by requiring them to sign into multiple applications every day. When done well, zero-trust policies and controls should work successfully behind the scenes with no noticeable impact on clinicians.</p><p>But implementation requires careful balance. Healthcare is an industry with one of the highest numbers of connected devices. Most clinical procedures rely on several medical and IoT devices that instantly sync data to medical databases. For healthcare organizations, device functionality comes first. Security comes second. All devices must work.</p><p>Zero trust in a care environment becomes zero flow without careful implementation. 
Zero flow becomes zero safety.</p><p>The difference between implementing zero trust in a healthcare setting versus other industries is that instead of just protecting devices and data, the goal of clinical zero trust is also to protect the physical workflows of care delivery, including the people and processes responsible. Healthcare organizations will likely operate in a hybrid zero-trust/perimeter-based mode indefinitely while modernizing their infrastructure.</p><h2>The Signal You Cannot Ignore</h2><p>Here is the signal that matters more than any audit finding:</p><p>A system clinicians work around is already unsafe.</p><p>Not insecure. Unsafe.</p><p>Workarounds are evidence of design failure, not user failure. They are leading indicators of where security is misaligned with care. They tell you exactly where friction is accumulating and where risk is being displaced rather than reduced.</p><p>Treating workarounds as policy violations misses their diagnostic value. They are telling you exactly where the system cannot carry the load you have placed on it.</p><p>When researchers studied clinicians doing their work, they found that &#8220;workarounds to cyber security are the norm, rather than the exception.&#8221; Clinicians acknowledge that effective security controls are important, especially in an essential service like healthcare. Unfortunately, all too often with these tools, clinicians cannot do their job. The medical mission trumps the security mission.</p><p>These are not terrorists or black hat hackers. These are clinicians trying to use the computer system for conventional healthcare activities. Mostly, the idea is that computer and security experts rarely happen to also be clinical care experts.</p><p>SIGNAL exists to surface this truth. To treat friction as data. 
To instrument the gap between how systems are supposed to work and how they actually work when people&#8217;s lives are on the line.</p><h2>Redefining the CISO Role Again</h2><p>If the CISO is still operating as a perimeter guard, they are guarding the wrong thing.</p><p>The job is no longer to keep threats out at all costs. The job is to ensure that care can continue safely even when things go wrong. That requires abandoning metaphors that treat hospitals like vaults and embracing models that treat them like living systems.</p><p>The CISO role has evolved significantly since 1995. By 2000, the CISO&#8217;s responsibilities extended beyond corporate boundaries to include e-business partnerships, mirroring institutional changes. The role changed to focus on enterprise risk, governance, privacy, board-level engagement, and business needs.</p><p>Steve Katz stated that the role is about business risk and that cybersecurity is a way to assess business risk, &#8220;not an end in itself.&#8221; Key skills became organizational leadership, strategic thinking, communication with boards, budget management, vendor relations, business processes, regulatory overview, and the ability to merge security outcomes with business needs.</p><p>In healthcare, this evolution must go further.</p><p>The healthcare CISO must understand that clinical workflow is not a constraint to work around. It is the thing being protected. Security controls must be evaluated not just for strength, but for survivability under the conditions where they will actually be used: understaffed emergency departments, shift changes, system degradation, crisis scenarios.</p><p>The CISO myth persists because it is familiar and legible to boards. But familiarity is not fitness. Legibility is not safety.</p><p>Healthcare does not need better walls. It needs systems that bend without breaking, controls that degrade gracefully, and security leaders who understand that friction in care pathways is not a nuisance. 
It is a warning.</p><p>The perimeter guard was never the right archetype.</p><h2>The Provocation</h2><p>Financial services optimized security for an environment where minutes of delay might mean lost revenue. Healthcare operates in an environment where minutes of delay can mean death.</p><p>The research is unambiguous. 73% of healthcare professionals violate security policies not out of malice but out of necessity. 45 minutes per shift consumed by authentication overhead. 1.5 hours per day spent logging into systems with aggressive timeouts. Workarounds documented as the norm across every healthcare setting studied.</p><p>These are not implementation failures. These are design failures.</p><p>Security models built for static assets, predictable workflows, and tolerable latency will always fail in environments characterized by movement, improvisation, and urgency.</p><p>The CISO who continues to optimize for perimeter defense in a clinical world is solving the wrong problem. The walls are strong, but the patients are dying inside them because care cannot flow through the checkpoints fast enough.</p><p>Healthcare security leaders must accept that their role is fundamentally different from their counterparts in finance. The worst-case scenario is not a data breach. It is a patient dying because the security architecture made care impossible to deliver.</p><p>Zero trust can work in healthcare, but only when implemented with clinical zero trust principles: protecting workflows, not just data. Maintaining care delivery under stress, not just preventing unauthorized access. Treating clinicians as the adaptive components that keep the system functioning, not as the security vulnerabilities to be constrained.</p><p>A system that clinicians must fight to use is not secure. It is unsafe. The workarounds prove it. The latency proves it. The burnout proves it. 
The deaths prove it.</p><p>The only question is whether CISOs will recognize that guarding the perimeter is not the same as protecting care.</p><p>The choice is binary.</p><p><em>* A deck of this article is available for paid subscribers.</em></p><p></p>
      <p>
          <a href="https://www.trustable.blog/p/the-ciso-myth-perimeter-guard-in">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Patient Outcomes Are Trust Outcomes]]></title><description><![CDATA[Redefining What &#8220;Success&#8221; Means in Healthcare Security]]></description><link>https://www.trustable.blog/p/patient-outcomes-are-trust-outcomes</link><guid isPermaLink="false">https://www.trustable.blog/p/patient-outcomes-are-trust-outcomes</guid><dc:creator><![CDATA[Rachel Maron]]></dc:creator><pubDate>Fri, 23 Jan 2026 12:07:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!iKig!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b47e8d8-2c14-44bc-9827-f409a719ba85_1440x1440.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/p/patient-outcomes-are-trust-outcomes?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/p/patient-outcomes-are-trust-outcomes?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2>Introduction</h2><p>Healthcare security still believes it wins when nothing explodes.</p><p>No breach. No regulator knocking. No angry board call.</p><p>Clean audit. Green dashboard. Everyone exhales.</p><p>This definition of success is not just outdated. It is actively dangerous.</p><p>Because patients do not experience &#8220;security posture.&#8221; They experience care. 
And when security fails in healthcare, the harm does not show up as a line item. It shows up as delayed diagnosis, wrong treatment, abandoned follow-up, and the quiet erosion of trust that determines whether people ever come back.</p><p>Patient outcomes are trust outcomes. If your security program cannot see that, it is blind by design.</p><h2><strong>The Compliance Mirage</strong></h2><p>HIPAA taught an entire industry to confuse legality with safety.</p><p>HIPAA compliance answers a narrow question: Did you follow prescribed controls to protect regulated data? It does not ask whether your systems remain usable under stress. It does not ask whether patients can access care during failure. It does not ask whether trust survives the incident.</p><p>A hospital can be fully HIPAA-compliant and still produce catastrophic patient harm.</p><p>A hospital can encrypt everything perfectly and still strand clinicians without records.<br> A hospital can pass every audit and still permanently lose patient confidence.<br> A hospital can &#8220;do everything right&#8221; and still hemorrhage outcomes.</p><p>Compliance measures behavior. Patients experience consequences.</p><p>Security teams have been rewarded for optimizing the former while ignoring the latter. That optimization is now lethal.</p><h2><strong>From Data Protection to Value Protection</strong></h2><p>The core error is subtle but foundational.</p><p>Healthcare security has defined its object as data about patients.</p><p>But patients do not entrust hospitals with data. They entrust them with value.</p><p>They entrust their time when they wait and comply. They entrust their bodies when they consent to treatment. They entrust their futures when they disclose honestly. They entrust their dignity when they become vulnerable.</p><p>Data is just one carrier of that value. 
When security fixates on the container and ignores the content, it protects the shell while the substance degrades.</p><p>Protecting data about patients is necessary. Protecting value for patients is non-optional.</p><p>That is the shift.</p><h2><strong>Trust Failures Are Clinical Failures</strong></h2><p>Trust is often dismissed as a soft concept because it is rarely operationalized. That dismissal collapses the moment you trace trust failures to patient harm.</p><p>A system outage during intake is not neutral. It delays diagnosis.<br> A corrupted record is not clerical. It produces misdiagnosis.<br> An opaque breach response is not PR-related. It causes abandonment.</p><p>Patients who lose trust behave differently in ways that are both measurable and dangerous. They withhold information. They delay seeking care. They disengage from treatment plans. They avoid follow-up. They leave the system entirely.</p><p>These behaviors precede morbidity. They precede mortality. They precede cost spikes that leadership pretends are &#8220;unrelated.&#8221;</p><p>Trust loss is not an emotional inconvenience. It is a causal mechanism.</p><h2><strong>The Mechanisms: How Trust Erosion Kills</strong></h2><p>A <a href="https://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0170988">meta-analysis</a> examining trust in healthcare professionals found a small to moderate correlation between trust in healthcare professionals and health outcomes (r = 0.24, 95% CI: 0.19&#8211;0.29). This correlation is significant because trust operates as a mediator for measurable clinical behaviors.</p><p>In a study of <a href="https://www.sciencedirect.com/science/article/abs/pii/S0277953608006734">480 adult patients with type 2 diabetes</a>, researchers found that patients who trust their physicians more demonstrate stronger self-efficacy and outcome expectations, which, in turn, drive better treatment adherence and objective health outcomes. The mechanism is not mysterious. 
Trust functions as the substrate upon which therapeutic response is built.</p><p>When trust erodes, the entire causal chain fractures.</p><p>Patients with greater trust in <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC7153104/">provider confidentiality</a> are significantly less likely to withhold important health information. Conversely, patients who experience trust violations engage in protective behaviors that compromise their care. In general <a href="https://vercara.digicert.com/news/vercara-research-75-of-u-s-consumers-would-stop-purchasing-from-a-brand-if-it-suffered-a-cyber-incident">consumer research</a>, 66% say they wouldn&#8217;t trust a company after a breach, and 75% say they&#8217;d sever ties after a cyber incident.</p><p>A <a href="https://www.sciencedirect.com/science/article/abs/pii/S0167811625000047">study using difference-in-differences methods</a> found that patients affected by a healthcare data breach were less likely to visit hospitals in the months following the breach. Up to 40% of patients consider switching providers after a breach. The withdrawal is not temporary. It is structural.</p><p>These are not sentiment surveys. These are behavioral predictors with direct clinical consequences.</p><h2><strong>The Economic Weight of Abandonment</strong></h2><p>Patient nonadherence costs the U.S. healthcare system between $100 billion and $300 billion annually due to avoidable hospitalizations, emergency room visits, and preventable complications. Nonadherence represents 3% to 10% of total U.S. healthcare costs. This translates to approximately 125,000 deaths per year.</p><p>What does this have to do with trust?</p><p>Poor communication and lack of trust can undermine adherence. The quality of the patient-provider relationship is crucial. 
When trust in the healthcare system deteriorates, adherence collapses as a downstream consequence.</p><p>Patients in lower socioeconomic brackets already struggle with medication costs, which leads to rationing or skipping doses. Add trust erosion from a security failure, and the abandonment accelerates. Patients withhold crucial health information from providers. They delay seeking medical care. They provide inaccurate information to protect their privacy. They avoid participating in medical research or health information exchanges.</p><p>This is how security failures metabolize into mortality.</p><p>The mechanism travels like this: data breach &#8594; trust violation &#8594; information withholding &#8594; diagnostic error &#8594; treatment failure &#8594; preventable death.</p><p>Security teams measure the breach. Who counts the bodies?</p><h2><strong>Abandonment Is a Security Failure</strong></h2><p>One of the least acknowledged harms of healthcare security failure is abandonment.</p><p>When systems go dark, patients are left without orientation. No records. No clarity. No guidance. No explanation of what happened or what to do next.</p><p>Abandonment produces fear. Fear produces avoidance. Avoidance produces worse outcomes.</p><p>Security teams rarely count abandonment because it does not trigger an alert. But patients count it immediately. They feel it in the silence when portals fail, when clinics cannot answer, when no one can tell them whether their treatment plan still exists.</p><p>During the <a href="https://www.beckershospitalreview.com/healthcare-information-technology/cybersecurity/the-commonspirit-ransomware-attack-1-year-later/">CommonSpirit Health attack in 2022</a>, patients experienced exactly this terror. The second-largest nonprofit hospital chain in the United States went offline. Patients could not access their records. Pharmacies could not verify prescriptions. Scheduled surgeries were delayed. 
Emergency departments diverted ambulances.</p><p>The patients trapped in that chaos did not experience a &#8220;technical incident.&#8221; They experienced abandonment by a system they trusted to hold them.</p><p>If your incident response leaves patients alone in uncertainty, you did not &#8220;contain&#8221; the incident. You amplified it.</p><h2><strong>Trust Is a Leading Indicator</strong></h2><p>Healthcare loves lagging indicators. Mortality rates. Readmission rates. Length of stay.</p><p>By the time those metrics move, harm is already entrenched.</p><p>Trust is different. Trust is a leading indicator.</p><p>Leading indicators in healthcare are forward-looking measurements that give teams early warning of likely outcomes. They focus on inputs and processes that can be influenced now. Lagging indicators are retrospective and outcomes-based. They are easy to measure but difficult to improve or influence.</p><p>Trust friction shows up early as missed appointments, hesitation, second-guessing, withdrawal, and anger directed at frontline staff.</p><p>These are not behavioral quirks. They are system health signals.</p><p><a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC5814324/">No-show rates vary widely by setting, but 20% is common in many outpatient contexts</a> with scheduled appointments. In mental health services, <a href="https://www.cambridge.org/core/journals/advances-in-psychiatric-treatment/article/why-dont-patients-attend-their-appointments-maintaining-engagement-with-psychiatric-services/5E3E809B3FC76807765328FC1F05CB7D">up to 50% of patients who miss appointments drop out of scheduled care</a>. A qualitative study exploring why patients miss appointments found that the reasons center on three types of issues: emotions, perceived disrespect, and not understanding the scheduling system.</p><p>The norm of reciprocity suggests that a patient who feels disrespected would feel no obligation to respect the system. 
This construct, respect, underlies the association of waiting times, satisfaction, and nonattendance. Patients who feel unheard, rushed, or judged during healthcare interactions disengage from the system altogether, leading to long-term avoidance of care.</p><p>Security incidents violate respect structurally. When a hospital cannot explain what happened to patient data, cannot assure safety, cannot restore access, the disrespect is absolute. Patients respond predictably. They stop showing up.</p><p>This is why trust metrics matter more than compliance metrics. Trust friction precedes outcome collapse. It gives healthcare organizations time to intervene before the harm becomes irreversible.</p><p>SIGNAL exists to surface exactly this layer. To instrument emotional safety, coherence, and confidence before outcomes collapse. To detect where systems make people feel unsafe, long before failure becomes irreversible.</p><p>Ignoring trust because it feels subjective is like ignoring pain because it does not show up on imaging. It is malpractice masquerading as rigor.</p><h2><strong>Redefining Security &#8220;Success&#8221;</strong></h2><p>If patient outcomes are trust outcomes, then healthcare security must redefine success.</p><p>Success is not &#8220;no incidents.&#8221;<br> Success is survivability under incident conditions.</p><p>Success is not &#8220;data remained encrypted.&#8221;<br> Success is patients still receiving care.</p><p>Success is not &#8220;we followed the playbook.&#8221;<br> Success is clinicians not improvising dangerously.</p><p>Success is not &#8220;we disclosed within 72 hours.&#8221;<br> Success is patients understanding what happened and what comes next.</p><p>This requires a different scoring system. One that measures time to clinical continuity, integrity under degradation, clarity of communication, patient confidence post-incident, and clinician trust in the system.</p><p>These are not abstract ideals. 
They are operational requirements for care-safe security.</p><p>Consider what happened at the University of Vermont Medical Center in 2020. The ransomware attack disabled chemotherapy infusion technology. Oncology had to create command centers to oversee ethical triage of systemic therapies. Patients were stratified into tiers: curative-intent urgent care, treatments safe to delay 1-2 weeks, and treatments safe to delay at least 2 weeks.</p><p>This is what security failure looks like when measured in clinical posture. Not &#8220;systems offline.&#8221; Lives prioritized under scarcity.</p><p>The oncology team did not measure success by how quickly they restored systems. They measured success by whether patients with time-sensitive cancer treatments survived the artificial resource constraint created by a security failure.</p><p>That is the standard healthcare security should adopt.</p><h2><strong>The Signal Shift</strong></h2><p>This is the inversion healthcare security has been avoiding:</p><ul><li><p>From protecting data about patients to protecting value for patients.</p></li><li><p>From perimeter defense to circulatory resilience.</p></li><li><p>From compliance theater to outcome stewardship.</p></li><li><p>From technical risk management to clinical risk ownership.</p></li></ul><p>Once you make this shift, certain truths become unavoidable.</p><p>Security controls that degrade care are unsafe. Architectures that fail silently are unethical. Incident responses that prioritize optics over patients are illegitimate.</p><p>And CISOs who measure success without patient outcomes are flying blind.</p><h2><strong>The Provocation</strong></h2><p>Healthcare security can continue congratulating itself for clean audits while trust erodes quietly in waiting rooms.</p><p>Or it can accept what the data, the deaths, and the patients have already made clear.</p><p>Patient outcomes are trust outcomes.</p><p>Every availability failure is a dignity failure. 
Every integrity failure is an accountability failure. Every opaque response is an agency failure.</p><p>These map directly to the Trust Envelope Model. Dignity requires that patients have access to the care they need. Accountability requires that systems can be relied upon to maintain accurate, trustworthy information. Agency requires that patients understand what is happening to them and be able to take informed action.</p><p>When ransomware disables chemotherapy scheduling, Dignity collapses. When corrupted records produce wrong allergy information, Accountability collapses. When breach notifications can legally arrive up to 60 days after discovery, with no clarity about what patients should do next, Agency collapses.</p><p>Trust Value Management is not a philosophy layered on top of security. It is the missing control plane that healthcare has been pretending it did not need.</p><p>The research is unambiguous. The mechanisms are documented. The deaths are counted.</p><p>Between $100 billion and $300 billion in annual costs. 125,000 deaths per year. 66% trust loss after breaches. 75% patient abandonment. Up to 50% dropout from care.</p><p>These are not abstract risks. These are measured outcomes.</p><p>Healthcare security that cannot see trust as infrastructure is healthcare security that kills patients while celebrating compliance.</p><p>The only question is whether security leaders will accept that their decisions have clinical consequences. Whether they will measure trust friction as rigorously as patch compliance. Whether they will own the deaths.</p><p>The choice is binary.</p><p><em>*this article is available as a downloadable PDF for paid subscribers</em></p>
      <p>
          <a href="https://www.trustable.blog/p/patient-outcomes-are-trust-outcomes">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[The CISO as Patient-Safety Actor: Why Cybersecurity Is Now a Patient-Facing Function]]></title><description><![CDATA[When uptime, integrity, and clarity determine whether care arrives on time.]]></description><link>https://www.trustable.blog/p/the-ciso-as-patient-safety-actor</link><guid isPermaLink="false">https://www.trustable.blog/p/the-ciso-as-patient-safety-actor</guid><dc:creator><![CDATA[Rachel Maron]]></dc:creator><pubDate>Wed, 21 Jan 2026 12:29:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!YtZa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe850e0d7-a91c-4fc9-b6dc-7cca5b4233de_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/p/the-ciso-as-patient-safety-actor?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/p/the-ciso-as-patient-safety-actor?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p><h1><strong>The CISO as Patient-Safety Actor: Why Cybersecurity Is Now a Patient-Facing Function</strong></h1><p>For a long time, healthcare security lived a polite lie.</p><p>The lie was that cybersecurity was an IT concern. A back-office hygiene practice. 
A necessary nuisance whose job was to keep auditors calm, insurers satisfied, and billing systems upright. If it occasionally annoyed clinicians or slowed workflows, well, that was the price of safety.</p><p>But here is the thing about lies in clinical environments. They do not stay abstract. They metabolize into harm.</p><p>In modern healthcare, there is no meaningful boundary between the technical and care systems. The stack is at the bedside. The network is the hallway. The EHR is the chart in the clinician&#8217;s hand while someone is scared, in pain, and half-dressed under fluorescent lights.</p><p>That means the CISO is no longer a perimeter guard. Whether they like it or not, they are a patient-safety actor.</p><h2><strong>When Systems Fail, Patients Feel Actual Harm</strong></h2><p>A ransomware attack does not &#8220;impact operations.&#8221; Between 2016 and 2021, <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC9856685/">374 documented ransomware attacks</a> on healthcare delivery organizations affected the protected health information of 42 million patients. During these attacks, computers and electronic health records were disabled or encrypted. Clinicians were forced to document care by hand. Appointments and surgeries were delayed or canceled. Emergency departments diverted ambulances.</p><p>In 2020, a ransomware attack at the University of Vermont Medical Center disabled chemotherapy infusion technology. A nurse compared those weeks to only one experience: working in a burn unit following the Boston Marathon bombing. The oncology department lost access to individualized EMR chemotherapy plan templates that drove nursing and pharmacy processes. Infusion visit volume dropped 52% in the first week. New patients could not access diagnostic services. 
The hospital created command centers to oversee ethical triage of systemic therapies.</p><p>University of Minnesota School of Public Health experts estimate that <a href="https://www.voanews.com/a/ransomware-attacks-death-threats-endangered-patients-and-millions-of-dollars-in-damages/7520952.html">between 42 and 67 patients died</a> as a result of ransomware attacks between 2016 and 2021. The estimate does not include deaths among patients covered by private insurance.</p><p>An EHR outage does not &#8220;reduce productivity.&#8221; It blocks medication reconciliation in the ER. When ransomware forced Universal Health Services offline in 2020, a clinical staff member reported having no access to any patient files, no history, nothing. Doctors could not access X-rays or CT scans. In operating rooms, anesthesia checklists disappeared. In ICUs, vital signs went unrecorded. In emergency departments, clinicians did not know patients&#8217; allergies or the last medication administered.</p><p>A data integrity failure does not &#8220;raise compliance risk.&#8221; Hackers can alter medication details, allergy information, or diagnostic data. These changes lead to medical errors, misdiagnoses, and incorrect prescriptions. Wrong information persists in records over time, creating a continual risk of improper treatment.</p><p>Availability failures feel like abandonment.<br>Integrity failures feel like betrayal.<br>Latency feels like neglect.</p><p>Patients experience these failures viscerally. Not as headlines. Not as KPIs. As fear. As confusion. As the sickening realization that the system they trusted to hold them cannot remember who they are today.</p><h2><strong>The Spillover Effect: How One Hospital&#8217;s Breach Kills Patients Across Town</strong></h2><p>Ransomware attacks do not confine their harm to breached facilities. When hospitals go offline, neighboring facilities absorb the displaced patients.
The results are measurable and lethal.</p><p>A <a href="https://pubmed.ncbi.nlm.nih.gov/37155166/">study examining the spillover effects</a> of hospital ransomware attacks documented what happens at unaffected facilities when nearby hospitals are compromised. Emergency medical services arrivals increased 35.2%. Patient volume increased 15.1%. Waiting room time increased 47.6%. Stroke code activations increased 74.6%. Confirmed strokes increased 113.6%. Cardiac arrest cases increased 81%.</p><p>These are not theoretical projections. These are deaths. Strokes that became permanent disability. Hearts that stopped beating while patients waited in overwhelmed emergency departments.</p><p>In rural areas with no backup capacity, the consequences are starker. When a ransomware attack cripples the only hospital for 50 miles, entire communities lose access to emergency care. Patients die in ambulances. Patients die at home, afraid to seek care that is no longer available.</p><p>This is what happens when cybersecurity is treated as a perimeter problem instead of a circulatory system. The failure propagates. The harm compounds. The bodies pile up.</p><h2><strong>Patient Safety Begins in Architecture Decisions Made Before the Crisis</strong></h2><p>We have spent decades pretending that patient safety stops at the bedside. That once the clinician does their job, the rest is infrastructure trivia. That fiction is no longer survivable.</p><p>Patient safety begins upstream, in architecture decisions made months or years before a crisis. It lives in how systems degrade under stress. It lives in whether clinicians can access what they need without improvising dangerous workarounds. It comes down to whether the hospital stays legible when something goes wrong.</p><p>In other words, patient safety begins in the security strategy.</p><p>Consider what happened during the CommonSpirit Health attack in 2022. CommonSpirit is the second-largest hospital chain in the United States. 
When ransomware forced their systems offline, ER nurses reverted to paper charting under crushing patient loads. The risk of transcription errors multiplied. Misplaced files became lethal possibilities. Medication mistakes bloomed in the chaos.</p><p>These failures were not inevitable. They were consequences of a security architecture that optimized for control rather than resilience under pressure. Systems designed with no plan for graceful degradation. Controls that assumed perfect conditions. Incident response protocols that prioritized optics over clarity.</p><p>The CISO owns these outcomes, whether the org chart acknowledges it or not.</p><h2><strong>The CISO Myth Was Built for Credit Cards, Not Bodies</strong></h2><p>The modern CISO role was forged in finance. In environments where the primary asset was data, the primary harm was theft, and the primary goal was containment. Lock the doors. Harden the perimeter. Minimize exposure.</p><p>That logic does not survive first contact with a hospital.</p><p>Hospitals are porous by necessity. They are staffed by humans under pressure. They are full of legacy devices that cannot be patched, clinical workflows that cannot be paused, and moments where speed matters more than elegance. You cannot &#8220;lock it down&#8221; without locking patients out.</p><p>So what happens instead is predictable. Security is imposed as control rather than designed as care. Clinicians become reluctant adversaries. Workarounds bloom like mold in a damp basement. Passwords get taped under keyboards because the system demanded obedience, not understanding.</p><p>In a study of clinical informaticians, 60.4% identified disruption to workflows and services as a top challenge to cybersecurity implementation. First-shift nurses need to log in and out of multiple devices throughout the day across several locations. Authentication requirements insert latency at every step.
A latency of even 90 seconds, repeated at every step, has a measurable cumulative impact on patient care.</p><p>Workarounds are defined in the literature as &#8220;informal temporary practices for handling exceptions to normal workflow.&#8221; In healthcare, they are clinicians&#8217; self-created solutions for achieving a work goal within a dysfunctional system of work processes that prevent or impede that goal.</p><p>A system that clinicians must fight to use is already unsafe.</p><p>This is not a failure of training or attitude. It is a design failure rooted in a category error. We imported domination-era security models into coherence-driven care environments and then acted surprised when they shattered under load.</p><h2><strong>The Care Delivery Chain Includes You</strong></h2><p>Healthcare leaders love flow diagrams of the care journey. Intake. Triage. Diagnosis. Treatment. Discharge. Follow-up.</p><p>Security is rarely drawn on those diagrams. Which is adorable, given how many of those steps depend entirely on secure, available, trustworthy systems.</p><p>Every authentication requirement inserts latency into intake.<br> Every poorly tuned alert interrupts diagnosis.<br> Every brittle control that fails under stress fractures treatment continuity.<br> Every opaque outage poisons discharge confidence and follow-up adherence.</p><p>These are not side effects. They are causal contributions.</p><p>If your security control delays care, you own the outcome. If your architecture collapses silently, you own the confusion. If your incident response prioritizes optics over clarity, you own the fear.</p><p>The clinical chain does not care what your org chart says.</p><h2><strong>From Risk Posture to Clinical Posture</strong></h2><p>Most CISOs are trained to speak in the language of &#8220;risk appetite.&#8221; This is a comforting abstraction.
It allows executives to pretend that risk is a negotiable commodity rather than a lived experience.</p><p>Patients do not consent to your risk appetite. They consent to care under an implied trust envelope.</p><p>They consent to care. And care has a different posture. It asks different questions. Not &#8220;what exposure can we tolerate?&#8221; but &#8220;what harm are we willing to cause?&#8221;</p><p>Translating cyber risk into clinical risk is not a communications exercise. It is a moral one. It requires admitting that uptime is not just a technical metric. It is a safety metric. That data integrity is not just accuracy, but diagnostic trust. That confidentiality breaches do not just violate the law, but rupture the emotional safety required for people to seek care at all.</p><p>Compliance will never measure this. Audits cannot feel fear. Dashboards cannot register betrayal. Only patients can.</p><h2><strong>Patients Feel Security Long Before They Understand It</strong></h2><p>Trust is not a value patients articulate. It is a condition they inhabit.</p><p>When systems work, trust is invisible. When systems fail, trust collapses instantly.</p><p>The evidence is unambiguous. After a data breach, 66% of patients report losing trust in the affected organization. 75% sever ties altogether. A study of 12 California hospitals over three years found that patients who experience a healthcare data breach are significantly less likely to visit hospitals in the following months.</p><p>Up to 40% of patients consider switching providers after a breach. Patients withhold important health information when trust in provider confidentiality erodes. They delay seeking medical care. They provide inaccurate information to protect their privacy. They avoid participating in medical research or health information exchanges.</p><p>This is not sentiment. This is signal.</p><p>Trust friction shows up as missed appointments, disengagement, second-guessing, and refusal. 
These are measurable outcomes that precede clinical deterioration. Ignoring them because they do not appear on a SOC report is how institutions quietly rot.</p><p>The SIGNAL methodology exists precisely to surface this kind of friction. To instrument emotional safety the same way we instrument throughput. To treat fear, confusion, and loss of confidence as early warning indicators rather than collateral damage.</p><p>In the Trust Envelope Model, these trust failures map directly to violations of structural invariants. Availability failures violate Dignity (the patient cannot access the care they need). Integrity failures violate Accountability (the system cannot be relied upon to maintain accurate information). Opaque incident response violates Agency (patients cannot understand what happened to them or what actions to take).</p><p>In healthcare, emotional safety is not a luxury. It is a prerequisite for effective care.</p><h2><strong>Case Sketches: No Villains, Just Physics</strong></h2><p>An oncology department taken offline by ransomware does not need a villain. It needs to be acknowledged that availability is care-critical. When chemotherapy infusion systems fail, patients with time-sensitive cancer treatments face survival consequences. The triage decisions required are not technical. They are ethical.</p><p>An ER slowed by EHR latency does not need a scapegoat. It needs to be recognized that performance under load is a safety requirement. When waiting times increase 47.6% at neighboring hospitals absorbing displaced patients, people die in waiting rooms.</p><p>A medical device isolated so aggressively it breaks monitoring continuity does not need a memo. It needs design humility. 
Network segmentation that prevents clinicians from accessing diagnostic imaging or infusion pump data creates the exact conditions for medical error that security was supposed to prevent.</p><p>These failures are not moral lapses; they are systemic consequences of treating security as a shield rather than a circulatory system. Of optimizing for control instead of coherence.</p><p>In Trust Thermodynamics terms, these systems have settled into local energy minima that optimize for compliance theater rather than actual resilience. The lattice configuration prioritizes demonstrable controls over survivable architecture. The proof of lattice maintenance is absent. When stress arrives, the system has no capacity to maintain its structure.</p><h2><strong>What Changes When the CISO Accepts the Clinical Role</strong></h2><p>Everything.</p><p>Decision criteria change. Controls are evaluated not just for strength, but for survivability under stress. The question becomes: &#8220;Does this security measure maintain its protective function when the hospital is operating under ransomware conditions, when staff are exhausted, when emergency patients are arriving faster than they can be processed?&#8221;</p><p>Escalation paths change. Incidents are communicated as care disruptions, not technical inconveniences. When Change Healthcare paid a $22 million ransom and the affiliate holding the data refused to release it, claiming he had not received his share, that was not a technical failure. That was a patient-safety crisis affecting prescription processing at 80% of U.S. pharmacies.</p><p>Accountability loops close. Security leaders remain present through recovery, not just containment. They participate in morbidity and mortality conferences. They sit in command centers during ethical triage decisions. 
They hear what happened to the patients whose chemotherapy was delayed.</p><p>Most importantly, the CISO stops asking, &#8220;Is this secure?&#8221; and starts asking, &#8220;Is this safe?&#8221;</p><p>That shift does not weaken security. It strengthens it. Systems designed to preserve trust under pressure are harder to exploit, harder to fracture, and easier to repair. Coherence is not softness. It is resilience.</p><p>Trust Thermodynamics teaches us that energy must be continuously supplied to maintain non-equilibrium order. The CISO who accepts their clinical role becomes an active source of that energy. They instrument trust friction. They measure emotional safety. They design for graceful degradation. They own the clinical consequences.</p><p>This is not an aspirational culture change. This is operational rigor applied to human safety instead of financial loss.</p><h2><strong>The Provocation</strong></h2><p>If your security program cannot explain how it behaves at the worst moment of someone&#8217;s life, it is not protecting healthcare. It is protecting itself.</p><p><a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC9856685/">Neprash shows that annual attacks more than doubled from 2016 to 2021</a>. In 2024 alone, 374 U.S. healthcare institutions were hit by ransomware, causing network shutdowns, offline systems, delays in critical medical procedures, and rescheduled appointments. The average cost of a healthcare data breach now exceeds $10.93 million.</p><p>But the real cost is measured in bodies. In cardiac arrests with no favorable neurological outcomes. In strokes that became permanent disability. In chemotherapy delayed past the point of treatment efficacy. In patients who stopped seeking care altogether.</p><p>Hospitals do not need more polished compliance artifacts. They need security leaders willing to own the clinical consequences of their decisions.</p><p>The CISO is already in the care pathway.
The clinical chain already includes authentication latency, availability failures, integrity violations, and trust erosion. These are not abstractions. They are mechanisms of harm.</p><p>The only question is whether CISOs will act like patient-safety actors. Whether they will attend the morbidity and mortality conferences. Whether they will sit in the command center during ethical triage. Whether they will measure trust friction as rigorously as they measure patch compliance.</p><p>Whether they will accept that security failures kill patients.</p><p>The operational disruption is documented. The clinical harm is measurable. The only open question is whether leadership treats this as patient safety or as IT weather.</p><h2><strong>Next in the Series</strong></h2><p>Patient Outcomes Are Trust Outcomes: How Trust Value Management Operationalizes What Clinical Research Has Been Measuring for Decades</p><p><em>*this article is available as a downloadable PDF Slide Deck for paid subscribers.</em></p>
      <p>
          <a href="https://www.trustable.blog/p/the-ciso-as-patient-safety-actor">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[PART VI — THE DEPLOYMENT: How to Build the Trust Envelope in a Real Organization]]></title><description><![CDATA[The Trust Engineering Advantage]]></description><link>https://www.trustable.blog/p/part-vi-the-deployment-how-to-build</link><guid isPermaLink="false">https://www.trustable.blog/p/part-vi-the-deployment-how-to-build</guid><dc:creator><![CDATA[Rachel Maron]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:46:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!l0Su!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e7c7458-6806-4623-965a-9c512274d9ce_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><h1>The Trust Engineering Advantage</h1><p>PART I&#8212;<a href="https://www.trustable.blog/p/part-i-the-gap">THE GAP: Everyone Has the Research, No One Has the Machinery</a></p><p>PART II&#8212;<a href="https://www.trustable.blog/p/part-ii-the-diagnosis-the-research">THE DIAGNOSIS: The Research Is Already Measuring TEM, Just Poorly</a></p><p>PART III&#8212;<a href="https://www.trustable.blog/p/part-iii-the-law-why-interventions">THE LAW: Why Interventions Fail Without Structure</a></p><p>PART IV&#8212;<a href="https://www.trustable.blog/p/part-iv-the-instrumentation-trust">THE INSTRUMENTATION: Trust Is Measurable, Predictable, and Designable</a></p><p>PART V&#8212;<a href="https://www.trustable.blog/p/part-v-the-capital-thesis-trust-is">THE CAPITAL THESIS: Trust Is an Asset Class, and TEM Is the Pricing Model</a></p><p>PART VI&#8212;<a href="https://www.trustable.blog/p/part-vi-the-deployment-how-to-build">THE DEPLOYMENT: How to Build the Trust Envelope in a Real Organization</a> </p></blockquote><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" 
data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/p/part-vi-the-deployment-how-to-build?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/p/part-vi-the-deployment-how-to-build?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p><h1>PART VI: THE DEPLOYMENT</h1><h2>How to Build the Trust Envelope in a Real Organization</h2><p>Trust engineering is not therapy. It is not vibes. It is not values decks, listening tours, or DEI as ornamental signaling.</p><p><strong>Trust engineering is operational design.</strong></p><p>If Parts I&#8211;V established the architecture, physics, instrumentation, and capital logic of trust, Part VI answers the only question that matters to the VP Engineering who has a roadmap due Tuesday, the Chief Customer Officer whose NPS is cratering, or the CFO who just saw Q3 attrition numbers:</p><p><strong>W&#8230;</strong></p>
      <p>
          <a href="https://www.trustable.blog/p/part-vi-the-deployment-how-to-build">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[PART V — THE CAPITAL THESIS: Trust Is an Asset Class, and TEM Is the Pricing Model]]></title><description><![CDATA[The Trust Engineering Advantage]]></description><link>https://www.trustable.blog/p/part-v-the-capital-thesis-trust-is</link><guid isPermaLink="false">https://www.trustable.blog/p/part-v-the-capital-thesis-trust-is</guid><dc:creator><![CDATA[Rachel Maron]]></dc:creator><pubDate>Fri, 12 Dec 2025 16:43:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ejAd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F002c7283-9e69-4641-976c-c3aa66d3ee12_846x858.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1><strong>The Trust Engineering Advantage</strong></h1><blockquote><p>PART I&#8212;<a href="https://www.trustable.blog/p/part-i-the-gap">THE GAP: Everyone Has the Research, No One Has the Machinery</a></p><p>PART II&#8212;<a href="https://www.trustable.blog/p/part-ii-the-diagnosis-the-research">THE DIAGNOSIS: The Research Is Already Measuring TEM (Badly)</a></p><p>PART III&#8212;<a href="https://www.trustable.blog/p/part-iii-the-law-why-interventions">THE LAW: Why Interventions Fail Without Structure</a></p><p>PART IV&#8212;<a href="https://www.trustable.blog/p/part-iv-the-instrumentation-trust">THE INSTRUMENTATION: Trust Is Measurable, Predictable, and Designable</a></p><p>PART V&#8212;<a href="https://www.trustable.blog/p/part-v-the-capital-thesis-trust-is">THE CAPITAL THESIS: Trust Is an Asset Class, and TEM Is the Pricing Model</a></p><p>PART VI&#8212;<a href="https://www.trustable.blog/p/part-vi-the-deployment-how-to-build">THE DEPLOYMENT: How to Build the Trust Envelope in a Real Organization</a></p></blockquote><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe 
now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/p/part-v-the-capital-thesis-trust-is?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/p/part-v-the-capital-thesis-trust-is?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h1>PART V: THE CAPITAL THESIS</h1><h2>Trust Is an Asset Class, and TEM Is the Pricing Model</h2><p>There is a quiet truth in modern markets that nobody says out loud:</p><p><em><strong>The market already trades on trust.</strong> <strong>It just doesn&#8217;t have the vocabulary.</strong></em></p><p>It doesn&#8217;t call it trust. That would sound soft, immeasurable, unsuitable for institutional portfolios. So it invents euphemisms: execution quality, leadership premium, operational resilience, customer retention, risk-adjusted return, cost of capital, margin stability, human capital factor, workplace wellbeing index.</p><p>But &#8230;</p>
      <p>
          <a href="https://www.trustable.blog/p/part-v-the-capital-thesis-trust-is">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[PART IV — THE INSTRUMENTATION: Trust Is Measurable, Predictable, and Designable]]></title><description><![CDATA[The Trust Engineering Advantage]]></description><link>https://www.trustable.blog/p/part-iv-the-instrumentation-trust</link><guid isPermaLink="false">https://www.trustable.blog/p/part-iv-the-instrumentation-trust</guid><dc:creator><![CDATA[Rachel Maron]]></dc:creator><pubDate>Wed, 10 Dec 2025 13:42:44 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!wcLa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9576cc78-3e51-4840-b654-b2e8d70b1f43_1024x572.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><h1>The Trust Engineering Advantage</h1><p>PART I&#8212;<a href="https://www.trustable.blog/p/part-i-the-gap">THE GAP: Everyone Has the Research, No One Has the Machinery</a></p><p>PART II&#8212;<a href="https://www.trustable.blog/p/part-ii-the-diagnosis-the-research">THE DIAGNOSIS: The Research Is Already Measuring TEM, Just Poorly</a></p><p>PART III&#8212;<a href="https://www.trustable.blog/p/part-iii-the-law-why-interventions">THE LAW: Why Interventions Fail Without Structure</a></p><p>PART IV&#8212;<a href="https://www.trustable.blog/p/part-iv-the-instrumentation-trust">THE INSTRUMENTATION: Trust Is Measurable, Predictable, and Designable</a></p><p>PART V&#8212;<a href="https://www.trustable.blog/p/part-v-the-capital-thesis-trust-is">THE CAPITAL THESIS: Trust Is an Asset Class, and TEM Is the Pricing Model</a></p><p>PART VI&#8212;<a href="https://www.trustable.blog/p/part-vi-the-deployment-how-to-build">THE DEPLOYMENT: How to Build the Trust Envelope in a Real Organization</a> </p></blockquote><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" 
data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/p/part-iv-the-instrumentation-trust?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/p/part-iv-the-instrumentation-trust?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h1>PART IV: THE INSTRUMENTATION</h1><h2>Trust Is Measurable, Predictable, and Designable</h2><p>If the first three parts established the architecture and the physics, Part IV delivers what every executive secretly wants but will never say out loud:</p><p><strong>A dashboard.</strong></p><p>Not a vibes dashboard. Not an HR &#8220;engagement pulse&#8221; that measures sentiment three months after the damage is done. Not another survey that asks &#8220;On a scale of 1-10, how happy are you?&#8221; and generates data that goes directly into a deck that goes directly into a drawer.</p><p>A telemetry system that me&#8230;</p>
      <p>
          <a href="https://www.trustable.blog/p/part-iv-the-instrumentation-trust">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[PART III — THE LAW: Why Interventions Fail Without Structure]]></title><description><![CDATA[Why every corporate &#8220;happiness fix&#8221; fails: the Law of Friction and Meaning explains why removing resistance destroys trust, and why only engineered friction creates it.]]></description><link>https://www.trustable.blog/p/part-iii-the-law-why-interventions</link><guid isPermaLink="false">https://www.trustable.blog/p/part-iii-the-law-why-interventions</guid><dc:creator><![CDATA[Rachel Maron]]></dc:creator><pubDate>Mon, 08 Dec 2025 12:41:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!OxVI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F130b425e-8d3a-4b57-aabd-b8de49bbb955_1024x572.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><h1>The Trust Engineering Advantage</h1><p>PART I&#8212;<a href="https://www.trustable.blog/p/part-i-the-gap">THE GAP: Everyone Has the Research, No One Has the Machinery</a></p><p>PART II&#8212;<a href="https://www.trustable.blog/p/part-ii-the-diagnosis-the-research">THE DIAGNOSIS: The Research Is Already Measuring TEM, Just Poorly</a></p><p>PART III&#8212;<a href="https://www.trustable.blog/p/part-iii-the-law-why-interventions">THE LAW: Why Interventions Fail Without Structure</a></p><p>PART IV&#8212;<a href="https://www.trustable.blog/p/part-iv-the-instrumentation-trust">THE INSTRUMENTATION: Trust Is Measurable, Predictable, and Designable</a></p><p>PART V&#8212;<a href="https://www.trustable.blog/p/part-v-the-capital-thesis-trust-is">THE CAPITAL THESIS: Trust Is an Asset Class, and TEM Is the Pricing Model</a></p><p>PART VI&#8212;<a href="https://www.trustable.blog/p/part-vi-the-deployment-how-to-build">THE DEPLOYMENT: How to Build the Trust Envelope in a Real Organization</a> </p></blockquote><p class="button-wrapper" 
data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/p/part-iii-the-law-why-interventions?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/p/part-iii-the-law-why-interventions?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p><h1>PART III: THE LAW</h1><h2>Why Interventions Fail Without Structure</h2><p>Here is the question that breaks every corporate happiness initiative:</p><p><em><strong>If autonomy, fairness, safety, cooperation, and learning all predict performance&#8212;and we have twenty years of research proving it&#8212;why does every intervention fail?</strong></em></p><p>Not struggle. Not disappoint. <em>FAIL.</em></p><p>Ping pong tables don&#8217;t increase satisfaction. They increase cynicism. Unlimited PTO often reduces time off rather than expanding it. Open offices designed for collaboration destroy the conditions that enable it&#8230;</p>
      <p>
          <a href="https://www.trustable.blog/p/part-iii-the-law-why-interventions">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[PART II — THE DIAGNOSIS: The Research Is Already Measuring TEM, Just Poorly]]></title><description><![CDATA[The Trust Engineering Advantage]]></description><link>https://www.trustable.blog/p/part-ii-the-diagnosis-the-research</link><guid isPermaLink="false">https://www.trustable.blog/p/part-ii-the-diagnosis-the-research</guid><dc:creator><![CDATA[Rachel Maron]]></dc:creator><pubDate>Fri, 05 Dec 2025 12:39:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!HEaI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f768b10-0737-4916-a5ac-d67c8f572526_1024x572.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><h1>The Trust Engineering Advantage</h1><p>PART I&#8212;<a href="https://www.trustable.blog/p/part-i-the-gap">THE GAP: Everyone Has the Research, No One Has the Machinery</a></p><p>PART II&#8212;<a href="https://www.trustable.blog/p/part-ii-the-diagnosis-the-research">THE DIAGNOSIS: The Research Is Already Measuring TEM, Just Poorly</a></p><p>PART III&#8212;<a href="https://www.trustable.blog/p/part-iii-the-law-why-interventions">THE LAW: Why Interventions Fail Without Structure</a></p><p>PART IV&#8212;<a href="https://www.trustable.blog/p/part-iv-the-instrumentation-trust">THE INSTRUMENTATION: Trust Is Measurable, Predictable, and Designable</a></p><p>PART V&#8212;<a href="https://www.trustable.blog/p/part-v-the-capital-thesis-trust-is">THE CAPITAL THESIS: Trust Is an Asset Class, and TEM Is the Pricing Model</a></p><p>PART VI&#8212;<a href="https://www.trustable.blog/p/part-vi-the-deployment-how-to-build">THE DEPLOYMENT: How to Build the Trust Envelope in a Real Organization</a> </p></blockquote><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" 
data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/p/part-ii-the-diagnosis-the-research?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/p/part-ii-the-diagnosis-the-research?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h1>PART II: THE DIAGNOSIS</h1><h2>The Research Is Already Measuring TEM (Badly)</h2><p>If Part I exposed the canyon between what we know about human thriving and what our systems actually produce, Part II reveals the punchline hiding in plain sight:</p><p>The entire well-being canon has been measuring the Trust Envelope for two decades. They just didn&#8217;t know the name of the machine they were touching.</p><p>Positive psychology, organizational justice, Self-Determination Theory, prosocial behavior research, and psychological safety&#8212;all the grand theories presente&#8230;</p>
      <p>
          <a href="https://www.trustable.blog/p/part-ii-the-diagnosis-the-research">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[PART I — THE GAP: Everyone Has the Research, No One Has the Machinery]]></title><description><![CDATA[Everyone knows what makes humans thrive. Almost no system is built to deliver it. The Trust Envelope exposes the gap between research we admire and realities we engineer.]]></description><link>https://www.trustable.blog/p/part-i-the-gap</link><guid isPermaLink="false">https://www.trustable.blog/p/part-i-the-gap</guid><dc:creator><![CDATA[Rachel Maron]]></dc:creator><pubDate>Wed, 03 Dec 2025 12:18:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!nEbJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70793d7f-6058-46a6-972c-40dd955997d7_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><h1>The Trust Engineering Advantage</h1><p>PART I&#8212;<a href="https://www.trustable.blog/p/part-i-the-gap">THE GAP: Everyone Has the Research, No One Has the Machinery</a></p><p>PART II&#8212;<a href="https://www.trustable.blog/p/part-ii-the-diagnosis-the-research">THE DIAGNOSIS: The Research Is Already Measuring TEM (Badly)</a></p><p>PART III&#8212;<a href="https://www.trustable.blog/p/part-iii-the-law-why-interventions">THE LAW: Why Interventions Fail Without Structure</a></p><p>PART IV&#8212;<a href="https://www.trustable.blog/p/part-iv-the-instrumentation-trust">THE INSTRUMENTATION: Trust Is Measurable, Predictable, and Designable</a></p><p>PART V&#8212;<a href="https://www.trustable.blog/p/part-v-the-capital-thesis-trust-is">THE CAPITAL THESIS: Trust Is an Asset Class, and TEM Is the Pricing Model</a></p><p>PART VI&#8212;<a href="https://www.trustable.blog/p/part-vi-the-deployment-how-to-build">THE DEPLOYMENT: How to Build the Trust Envelope in a Real Organization</a> </p></blockquote><p class="button-wrapper" 
data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/p/part-i-the-gap?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/p/part-i-the-gap?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h1>PART I: THE GAP</h1><h2>Everyone Has the Research. No One Has the Machinery.</h2><p>Walk into any corporate keynote ballroom today, and you will hear the same soft-focus sermon: happy employees perform better. People need dignity and autonomy. Social support matters. Fairness matters. Safety matters. When humans feel respected and connected, the whole machine hums.</p><p>And here is the inconvenient thing: The research actually agrees with them. In fact, the research has been agreeing with them for more than twenty years.</p><p>Shawn Achor&#8217;s synthesis of a decade &#8230;</p>
      <p>
          <a href="https://www.trustable.blog/p/part-i-the-gap">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[“Government Shouldn’t Pick Winners”—But They Must Pick Boundaries]]></title><description><![CDATA[When Cognitive Infrastructure Becomes National Infrastructure]]></description><link>https://www.trustable.blog/p/government-shouldnt-pick-winnersbut</link><guid isPermaLink="false">https://www.trustable.blog/p/government-shouldnt-pick-winnersbut</guid><dc:creator><![CDATA[Rachel Maron]]></dc:creator><pubDate>Wed, 26 Nov 2025 12:21:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!hO9w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8a9570a-9ccc-4335-b805-6098daed3e90_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/p/government-shouldnt-pick-winnersbut?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/p/government-shouldnt-pick-winnersbut?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h3>The Clarity After the Controversy</h3><p>In November 2025, OpenAI&#8217;s CFO Sarah Friar sparked a firestorm when she suggested at a Wall Street Journal event that the U.S. government should provide &#8220;backstops&#8221;&#8212;loan guarantees&#8212;for the company&#8217;s trillion-dollar infrastructure buildout. 
The backlash was swift and severe, with David Sacks, the White House AI czar, declaring flatly: &#8220;There will be no federal bailout for AI.&#8221;</p><p>Sam Altman moved quickly to clarify: &#8220;We do not have or want government guarantees for OpenAI datacenters. We believe that governments should not pick winners or losers, and that taxpayers should not bail out companies that make bad business decisions.&#8221;</p><p>On this specific point&#8212;that taxpayers shouldn&#8217;t underwrite private company failures&#8212;Altman is absolutely right. Markets need failure. Creative destruction is capitalism&#8217;s immune system, and weakening it creates moral hazard at scale. The 2008 financial crisis proved this with brutal clarity: when profits are privatized but l&#8230;</p>
      <p>
          <a href="https://www.trustable.blog/p/government-shouldnt-pick-winnersbut">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Déjà Vu in the Cloud: The Drift → Gainsight → Salesforce Breach Is the Canary in the Identity Coal Mine]]></title><description><![CDATA[OAuth token theft is exposing a broken SaaS trust architecture. Over-scoped, untracked integrations let attackers move laterally at scale. This is trust debt coming due.]]></description><link>https://www.trustable.blog/p/deja-vu-in-the-cloud-the-drift-gainsight</link><guid isPermaLink="false">https://www.trustable.blog/p/deja-vu-in-the-cloud-the-drift-gainsight</guid><dc:creator><![CDATA[Rachel Maron]]></dc:creator><pubDate>Tue, 25 Nov 2025 20:09:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!O9kE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a11bfad-cb62-4ec1-9287-7d584f8377f4_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><p>There&#8217;s a particular kind of d&#233;j&#224; vu that appears when you watch the same failure repeat, not because people didn&#8217;t know better, but because the system itself was built to fail this way. 
That&#8217;s where we are with the news that Salesforce customers have been breached again, this time through Gainsight, a customer success platform that, like Drift before it, enjoys a deep and poorly governed integration into the Salesforce ecosystem.</p><p>If you haven&#8217;t been following the pattern, here it is in clean, brutal lines:</p><p>No one hacked Salesforce.<br>They hacked the trust relationships we built around Salesforce.<br>And those relationships are the real attack surface now.</p><p>This is not an &#8220;app breach story.&#8221; This is a story of trust collapse in enterprise SaaS architecture. And it&#8217;s only going to get worse, not because the technology is fundamentally broken, but because we&#8217;re treating trust as an automatic byproduct of vendor selection instead of something we deliberately manufacture and maintain.</p><h2>I. The Attack T&#8230;</h2>
      <p>
          <a href="https://www.trustable.blog/p/deja-vu-in-the-cloud-the-drift-gainsight">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[The Trust Mirage: Why Safety Theater Is More Dangerous Than No Safety At All]]></title><description><![CDATA[How Tech Giants Weaponize Governance Performance to Avoid Actual Accountability]]></description><link>https://www.trustable.blog/p/the-trust-mirage-why-safety-theater</link><guid isPermaLink="false">https://www.trustable.blog/p/the-trust-mirage-why-safety-theater</guid><dc:creator><![CDATA[Rachel Maron]]></dc:creator><pubDate>Tue, 25 Nov 2025 12:47:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!BPUm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6d70305-234e-47ff-a4e0-1f8cec0881d5_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/p/the-trust-mirage-why-safety-theater?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/p/the-trust-mirage-why-safety-theater?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h1></h1><h1><strong>How Tech Giants Weaponize Governance Performance to Avoid Actual Accountability</strong></h1><p>There is no industry more fluent in the aesthetics of responsibility than the one automating consequential decisions about human lives. 
Tech giants have perfected a particular form of corporate theater: the simulation of trustworthiness so convincing that most people forget to demand proof. They build beautiful &#8220;Responsible AI&#8221; pages. They publish model cards with impressive taxonomies. They convene ethics boards that disband when inconvenient. They give keynotes about the importance of safety, delivered by executives who will never face consequences when their systems fail.</p><p>This is Safety Theater, governance as performance art, designed to look like accountability from a distance while functioning as a liability shield up close.</p><p>And it works because of something deeper and more troubling: most people no longer remember what real trust feels like. When every app surveils you, every platform manipulates you, e&#8230;</p>
      <p>
          <a href="https://www.trustable.blog/p/the-trust-mirage-why-safety-theater">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[AI Governance Is Trust Engineering, Not Compliance Theater]]></title><description><![CDATA[What Happens When Models Own the Narrative and We Lose the Challenge Function]]></description><link>https://www.trustable.blog/p/ai-governance-is-trust-engineering</link><guid isPermaLink="false">https://www.trustable.blog/p/ai-governance-is-trust-engineering</guid><dc:creator><![CDATA[Rachel Maron]]></dc:creator><pubDate>Mon, 24 Nov 2025 13:04:10 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!gYit!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3d96d7f-dc7d-461d-a386-535aa6648113_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/p/ai-governance-is-trust-engineering?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/p/ai-governance-is-trust-engineering?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h1><strong>Why the Future of Intelligence Depends on the Architecture of Accountability</strong></h1><p>There&#8217;s a particular species of corporate delusion that flourishes in moments of technological vertigo: the belief that governance can be purchased as an add-on, implemented as a checklist, and certified as complete. 
We&#8217;ve seen it before, in financial services post-2008, in social media circa 2016, in every industry that discovered consequences arrive faster than controls. Now we&#8217;re watching it unfold again in AI, where the stakes are not merely operational but epistemic: the power to define what is true, who is believed, and whose reality counts.</p><p>The instinct is predictable. When risk becomes uncomfortable, organizations build a process around it. Process becomes documentation. Documentation becomes a shield. And eventually, the shield becomes costume jewelry, governance as performance art. Recognizable from a distance, nonsensical up close.</p><p>But AI systems don&#8217;t read your responsible AI policy deck. Models don&#8217;&#8230;</p>
      <p>
          <a href="https://www.trustable.blog/p/ai-governance-is-trust-engineering">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[The Age of Invisibility: How Algorithms Erase Older Women and Reshape Trust]]></title><description><![CDATA[What the Nature study on gendered representation reveals about trust friction, market efficiency, and the economics of bias.]]></description><link>https://www.trustable.blog/p/the-age-of-invisibility-how-algorithms</link><guid isPermaLink="false">https://www.trustable.blog/p/the-age-of-invisibility-how-algorithms</guid><dc:creator><![CDATA[Rachel Maron]]></dc:creator><pubDate>Mon, 10 Nov 2025 13:03:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!5pca!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57137a91-1470-41f1-8de7-7496216e0a5c_1024x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/p/the-age-of-invisibility-how-algorithms?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/p/the-age-of-invisibility-how-algorithms?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p>SOURCE: <a href="https://www.nature.com/articles/s41586-025-09581-z">Age and gender distortion in online media and large language models</a></p><h2>When Bias Becomes Infrastructure</h2><p>Some biases announce themselves with the brutality of slurs and the sting of 
slights. Others work differently: patient, systematic, encoded in pixels, and buried in training data. They don&#8217;t shout. They accumulate. They normalize through repetition until distortion becomes indistinguishable from reality.</p><p>The recent Nature study on age and gender distortion in online media and large language models exposes one of these quieter violences: a culture-wide algorithmic pattern that literally edits older women out of public reality. Across 1.4 million images and videos, women are portrayed as significantly younger than men in identical roles. The distortion intensifies precisely where it matters most: in high-status professions where women&#8217;s authority should be most visible. CEOs, doctors, professors: reality shows no age difference between men and women in these roles. The internet insi&#8230;</p>
      <p>
          <a href="https://www.trustable.blog/p/the-age-of-invisibility-how-algorithms">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[The Catastrophe That Trust Could Have Prevented]]></title><description><![CDATA[A ransomware attack didn&#8217;t just expose Ascension&#8217;s systems; it revealed how executives prioritized efficiency over trust, and patients paid the price.]]></description><link>https://www.trustable.blog/p/the-catastrophe-that-trust-could</link><guid isPermaLink="false">https://www.trustable.blog/p/the-catastrophe-that-trust-could</guid><dc:creator><![CDATA[Rachel Maron]]></dc:creator><pubDate>Wed, 05 Nov 2025 13:10:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Y6hJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67b435d4-0d8d-4235-8876-41a1c7ae7062_1008x1002.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/p/the-catastrophe-that-trust-could?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/p/the-catastrophe-that-trust-could?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h1></h1><h2>How Ascension Failed to Manufacture Trust (and How Patients Pay the Price)</h2><h3>To the Reader</h3><p>This piece responds to Ars Technica's "<a 
href="https://arstechnica.com/security/2025/09/how-weak-passwords-and-other-failings-led-to-catastrophic-breach-of-ascension/">How Weak Passwords and Other Failings Led to the Catastrophic Breach of Ascension.</a>" That story frames the breach as a technical lapse, a narrative of weak passwords, legacy protocols, and misconfigured systems that reads like a cybersecurity postmortem from any of the last two decades. Here we reframe it: breaches are never merely technical accidents waiting to happen. They are manufactured through deliberate management strategy, carefully constructed liability shields, and systemic incentives that trade away trust for short-term efficiency. What follows is an examination of how trust should have been manufactured at Ascension, why it wasn't, and why these failures continue to recur across industries with clockwork predictability.</p><p>The technical details of the Ascension breach, the contractor's laptop, the malicious link, and the Kerberoasting attack are symptoms&#8230;</p>
      <p>
          <a href="https://www.trustable.blog/p/the-catastrophe-that-trust-could">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[The Copyrighted Face: Denmark, Deepfakes, and the Struggle for Data Sovereignty]]></title><description><![CDATA[When your face becomes intellectual property, the battle for data sovereignty stops being abstract, it becomes personal, visible, and for sale.]]></description><link>https://www.trustable.blog/p/denmarks-copyright-face-experiment</link><guid isPermaLink="false">https://www.trustable.blog/p/denmarks-copyright-face-experiment</guid><dc:creator><![CDATA[Rachel Maron]]></dc:creator><pubDate>Mon, 03 Nov 2025 20:39:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!m8OV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12034e6a-b950-484c-ae33-eb159eb8e1bd_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.trustable.blog/p/denmarks-copyright-face-experiment?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.trustable.blog/p/denmarks-copyright-face-experiment?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p>SOURCE: <strong><a href="https://www.techpolicy.press/denmark-leads-eu-push-to-copyright-faces-in-fight-against-deepfakes/">Denmark Leads EU Push to Copyright Faces in Fight Against Deepfakes</a></strong></p><h2>I. 
The Ghost in the Portrait: A Historical Prelude</h2><p>In 1890, Samuel Warren and Louis Brandeis penned what would become the most influential law review article in American jurisprudence: &#8220;<a href="https://groups.csail.mit.edu/mac/classes/6.805/articles/privacy/Privacy_brand_warr2.html">The Right to Privacy</a>.&#8221; Their motivation was visceral and immediate. Warren&#8217;s wife had been subjected to invasive society page coverage; her private moments had been rendered public spectacle by the emerging technologies of photography and tabloid journalism. The solution they proposed was revolutionary: a legal right &#8220;to be let alone,&#8221; a barrier between the intimate self and the consuming gaze of publicity.</p><p>What made their argument radical was not merely its recognition of privacy as a right, but its acknowledgment that technology had fundamentally altered the nature of exposure. The photograph, they understood, was not simply a recording device; it was a weapon of replication, capable of divorcing image from context, presence fro&#8230;</p>
      <p>
          <a href="https://www.trustable.blog/p/denmarks-copyright-face-experiment">
              Read more
          </a>
      </p>
   ]]></content:encoded></item></channel></rss>