Part IV: AI You Can Argue With
When AI makes a medical decision, how do you argue back? Transparency, explainability, and trust aren’t optional; they’re the future of ethical care.
Why Transparency and Trust Still Matter, Even with a Machine
Trust is fragile in healthcare, especially if you’ve been misheard, misdiagnosed, or dismissed.
And yet, we’re building clinical AI systems that don’t explain themselves.
That can’t be questioned.
That don’t even know when they’re wrong.
These so-called “black box” models can detect patterns in massive data sets that humans might never notice, but their reasoning is invisible, even to the people who built them. And when the consequences of those invisible decisions are life-or-death, a lack of transparency becomes a moral failure.
Because here’s the reality:
You can’t consent to care if you don’t understand it.
You can’t trust a system you can’t argue with.
What Happens When AI Is Wrong?
In 2019, a widely used hospital risk algorithm was found to underestimate the health needs of Black patients by more than half. The model used healthcare spending as a proxy for health risk, assuming that more money spent meant more severe illness.
However, because Black patients have historically received less care than white patients, their risk scores were artificially deflated.
The result?
Millions of Black patients were flagged as “lower risk” and systematically excluded from preventative programs.
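To make the mechanism concrete, here is a deliberately simplified sketch, with made-up numbers, of how a spending-based proxy can deflate risk scores for a group that receives less care for the same level of need. It illustrates the logic of the failure, not the actual study or its data.

```python
# Toy illustration of proxy bias: identical underlying need, unequal spending.
# All numbers are invented for illustration; nothing here reproduces the 2019 study.
import random

random.seed(0)

def simulate_patient(group):
    # Same distribution of true health need for both groups.
    need = random.gauss(50, 10)
    # Group B receives less care for the same need, so it spends less.
    access = 1.0 if group == "A" else 0.6
    spending = need * access + random.gauss(0, 5)
    return {"group": group, "need": need, "spending": spending}

patients = [simulate_patient(g) for g in ("A", "B") for _ in range(1000)]

# A model trained to predict spending ends up ranking patients by spending, not need.
# Flag roughly the top 20% of spenders for the "high-risk" program.
cutoff = sorted((p["spending"] for p in patients), reverse=True)[len(patients) // 5]
flagged = [p for p in patients if p["spending"] >= cutoff]

for g in ("A", "B"):
    share = sum(p["group"] == g for p in flagged) / len(flagged)
    print(f"Group {g}: {share:.0%} of the flagged 'high-risk' patients")
# Despite identical underlying need, group B is flagged far less often,
# because the proxy (spending) encodes unequal access to care, not illness.
```

The bias isn’t in any single line of code; it’s in the choice of what the model was asked to predict.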
This wasn’t malicious. It wasn’t intentional.
But it was opaque and, therefore, unchallengeable.
And that’s the danger.
Who notices when an AI model makes a mistake in a clinical setting?
When a patient says, “That doesn’t feel right,” who listens?
If a machine can’t explain its logic, it can’t be corrected.
If no one knows how a decision was made, no one can meaningfully appeal it.
And if the bias is buried deep in a billion rows of training data, it may never even be detected.
The Black Box Problem
Most advanced AI models, especially deep learning systems, are functionally unexplainable.
We know what goes in (symptoms, vitals, lab results).
We know what comes out (a diagnosis, a risk score, a treatment suggestion).
But we don’t really know how the system connects A to B.
That’s a problem in any domain. But in healthcare, it’s catastrophic.
Historically, we’ve required physicians to explain their reasoning for patient safety and legal accountability. We expect providers to document why they chose one diagnosis over another, why a medication was prescribed, or why a patient was denied care.
AI bypasses that. And in doing so, it risks reintroducing a power dynamic that medicine has spent decades trying to dismantle: one where the system says “trust me,” but won’t tell you why.
Enter Explainability
Explainable AI (XAI) is a growing field devoted to solving this very issue. The goal is simple but profound: make algorithms interpretable to the people they affect.
Can a patient understand why they were denied a test?
Can a doctor understand how the model reached its diagnosis?
Can an auditor trace where bias entered the system?
Tools like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations) offer frameworks for reverse-engineering model decisions, helping users see which inputs drove the outcome. However, these tools are still largely used by data scientists, not by frontline clinicians or patients.
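To give a sense of what this looks like in practice, here is a minimal sketch using the shap library to attribute a toy model’s risk prediction to its inputs. The feature names, data, and model are invented placeholders, not a clinical system.

```python
# Minimal sketch of feature attribution with SHAP on a toy risk model.
# Feature names and data are invented placeholders, not a real clinical dataset.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = ["age", "systolic_bp", "hba1c", "prior_admissions"]
X = pd.DataFrame(rng.normal(size=(500, len(features))), columns=features)
risk = X["hba1c"] + 0.5 * X["prior_admissions"] + rng.normal(scale=0.5, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, risk)

# TreeExplainer computes Shapley-value attributions for tree-based models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])  # attributions for one patient

# Show which inputs pushed this patient's predicted risk up or down.
for name, contribution in zip(features, shap_values[0]):
    print(f"{name}: {contribution:+.3f}")
```

Even this small example makes the point: the attribution is a readout for a data scientist, not yet an explanation a patient or clinician can act on.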
To be meaningful in healthcare, explainability has to be embedded at every level:
Not just technical, but cultural. Not just internal, but patient-facing.
Patients need to see more than a risk score. They need to know what that score is based on. They need the right to say, “That doesn’t describe me,” and have someone listen.
Because even the smartest model is still just a tool. It’s not an oracle.
The Case for Human+AI, Not AI Alone
There’s a growing movement in medicine that argues against “AI replacement” and for AI augmentation: using technology to support, not supplant, human clinicians.
This isn’t just practical. It’s ethical.
Human-plus-AI models consistently outperform either alone. A 2020 study published in Nature found that a deep learning model for breast cancer screening performed well on its own but performed best when paired with radiologists. The algorithm reduced false negatives, and the humans caught edge cases.
Together, they were better.
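What does that partnership look like in practice? One common pattern, sketched below purely as an illustration (the thresholds are invented, and this is not the workflow from the Nature study), is a deferral rule: the model handles the clear-cut cases and routes the uncertain ones to a human reader.

```python
# Toy illustration of a human-in-the-loop deferral rule.
# Thresholds and probabilities are invented; this is not the study's workflow.

def triage(model_probability, low=0.10, high=0.90):
    """Return a routing decision for one screening exam."""
    if model_probability >= high:
        return "flag for immediate radiologist review (model confident: suspicious)"
    if model_probability <= low:
        return "routine workflow (model confident: likely normal)"
    # Anything in the uncertain band is deferred to a human reader.
    return "defer to radiologist (model uncertain)"

for p in (0.03, 0.55, 0.97):
    print(f"model probability {p:.2f} -> {triage(p)}")
```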
However, that partnership only works if humans understand what the AI is doing. And that means we can’t treat explainability as optional. It’s foundational.
If we want trust in AI, we have to design for it.
That means:
Transparent decision pathways
Auditable logs (a rough sketch follows this list)
Interfaces patients can understand
Built-in mechanisms for dispute and correction
Regulatory standards that require explainability, not just accuracy
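As one small, concrete illustration of the auditable-log and dispute items above, here is a rough sketch of what a disputable decision record could look like at the data level. The field names and structure are hypothetical, not a standard or any real system’s schema.

```python
# Minimal sketch of an auditable, disputable decision record.
# Field names and structure are hypothetical, not a standard or a real system's schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    patient_id: str
    model_version: str
    inputs: dict                 # the features the model actually saw
    output: float                # e.g., a risk score
    top_factors: list            # human-readable attribution summary
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    disputes: list = field(default_factory=list)

    def dispute(self, who: str, reason: str) -> None:
        """Attach a challenge to this decision so it can be reviewed and corrected."""
        self.disputes.append({"by": who, "reason": reason,
                              "at": datetime.now(timezone.utc).isoformat()})

record = DecisionRecord(
    patient_id="anon-001",
    model_version="risk-model-2.3.1",
    inputs={"prior_admissions": 0, "annual_spending": 1200},
    output=0.12,
    top_factors=["low historical spending lowered the score"],
)
record.dispute(who="patient",
               reason="I was unable to access care last year; spending understates my need.")
print(record)
```

The point is not the particular schema; it’s that a decision you can log, trace, and attach a dispute to is a decision someone can actually argue with.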
A Short History of Medical Authority
For most of modern medical history, the doctor’s word was final. Patients weren’t encouraged to ask questions. The system was paternalistic, especially toward women, people of color, and disabled individuals.
But that model has been shifting over the past 50 years. Informed consent, shared decision-making, and patient-centered care have slowly rebalanced the scales.
Now, with AI entering the room, we risk snapping back.
We’re reintroducing an authority that can’t be reasoned with.
We’re building systems that say “because I said so.”
And we’re doing it in the name of “progress.”
That’s not innovation. That’s regression with an upgrade.
The Right to Be Heard
We need healthcare systems, human and machine alike, that people can resist, that respect lived experience, and that allow for context.
Because medicine isn’t just about data; it’s about meaning.
And meaning is something only humans can fully interpret.
So yes, build the machine.
But build it with humility.
Build it so patients can argue with it.
And build it so when they do, it matters.
Coming Next:
Part V: Beyond the Algorithm – What Needs to Change in Medicine for AI Actually to Help
We’ll step back and examine what systemic and cultural shifts are necessary outside the code to make AI in healthcare truly transformative.
Back to the Beginning: Series Overview: “Trust, Bias, and the Algorithm: Rethinking AI in Women’s Healthcare”


