2 Comments
Kaylyn Waycaster

I like this piece a lot! The line re: "compliance as architecture" stuck with me. I'm curious - how do you see renewable proofs working in everyday user apps? I'm playing with this stuff in SideXSide, my women's safety app built on peer-trust without any tracking. Would love your take! - Kaylyn Waycaster (WBIA group :) )

Rachel Maron

Love this question, Kaylyn. It hits right at the intersection of ethics and design.

When I talk about "compliance as architecture," I mean shifting from rules users agree to, toward systems that prove integrity as they operate. In a safety app like SideXSide, renewable proofs don't have to be cryptographic or heavy; they can be relational and temporal.

For example:

-- Session proofs: each help signal or check-in produces a short-lived, verifiable event record (who initiated, who received, when it expired).

-- Peer proofs: trust ratings or acknowledgments that renew after every verified interaction, rather than persisting indefinitely. (Internally, we've been talking about how to use a blockchain token for this, but we keep stumbling on some of the potential global impacts of a Trustable Token; we don't want to create some weird/icky version of a social credit score.)

-- Boundary proofs: app-level evidence that no location data was stored or transmitted, renewed automatically every X hours, and visible to users.
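To make the idea concrete, here's a minimal sketch of what a renewable session proof might look like. Everything here is hypothetical, not code from SideXSide: the `SessionProof` record, field names, and TTL are illustrative assumptions. The key property is that trust expires by default and must be actively renewed.

```python
# Hypothetical sketch: a short-lived, verifiable session-proof record.
# Names and structure are illustrative, not from any real app.
from dataclasses import dataclass, field
import time

@dataclass(frozen=True)
class SessionProof:
    initiator: str          # who started the check-in or help signal
    receiver: str           # who acknowledged it
    issued_at: float = field(default_factory=time.time)
    ttl_seconds: float = 3600.0  # proof lapses after this unless renewed

    def is_valid(self, now=None):
        """A proof is only valid inside its time-to-live window."""
        now = time.time() if now is None else now
        return now < self.issued_at + self.ttl_seconds

    def renew(self):
        """Renewal issues a fresh record rather than extending the old one,
        so an expired proof leaves no lingering trust behind."""
        return SessionProof(self.initiator, self.receiver,
                            ttl_seconds=self.ttl_seconds)
```

The same expiring-record shape could back peer proofs (a trust acknowledgment that lapses unless renewed by another verified interaction) and boundary proofs (a "no location data stored" attestation reissued every X hours), without any persistent score accumulating anywhere.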

The goal isn't surveillance; it's witnessable safety. Proof becomes the quiet architecture of consent.

Would love to chat more about how you're handling peer verification loops; it sounds beautifully aligned.

P.S. My Bumble stories are the stuff of legend; you can't know how excited I am to see where you're going with your app.
