Part III: Can We Build a Better Machine? Designing AI That Sees What Doctors Don’t or Won’t
Healthcare AI can amplify bias or dismantle it. The difference? Whether we design for equity, not just accuracy. The future of care depends on it.
By now, the promise and peril of AI in healthcare should feel familiar. We’ve seen how training algorithms on biased data can reinforce the inequities that medicine should fix. We’ve looked at the slow-motion disaster of diagnostic delay for women and how those delays are magnified when machines inherit the same blind spots as their human predecessors.
But this isn’t a dead end.
It’s a fork in the road.
Because for all the (justified) fear about algorithmic bias, a new cohort of researchers, data scientists, and public health advocates are asking a better question:
What if we built AI not to replicate medicine as it is, but to imagine what it could be?
Rewriting the Model — Literally
Bias in AI isn’t inevitable. It’s not a ghost in the machine. It’s a byproduct of choices: what data we include, which outcomes we optimize for, and how we define “success.”
And that means it’s fixable.
Researchers are developing models across academic and clinical settings that center on fairness, intersectionality, and accountability. These models don’t just seek average accuracy; they evaluate performance across subgroups: by gender, race, age, and socioeconomic status.
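To make that concrete, here is a minimal sketch of what subgroup evaluation can look like in practice. It assumes a trained scikit-learn-style classifier and a test set that carries demographic columns; the model, the column names, and the 0.5 threshold are all illustrative assumptions, not a description of any specific deployed system.

```python
# Minimal sketch: report one score per demographic subgroup instead of a
# single aggregate number. `model` and the column names are hypothetical.
import pandas as pd
from sklearn.metrics import recall_score, roc_auc_score

def evaluate_by_subgroup(model, df, feature_cols, label_col, group_col):
    """Report sensitivity (recall) and AUC separately for each subgroup."""
    rows = []
    for group, subset in df.groupby(group_col):
        y_true = subset[label_col]
        y_prob = model.predict_proba(subset[feature_cols])[:, 1]
        y_pred = (y_prob >= 0.5).astype(int)  # illustrative threshold
        rows.append({
            group_col: group,
            "n": len(subset),
            "sensitivity": recall_score(y_true, y_pred),
            "auc": roc_auc_score(y_true, y_prob),
        })
    return pd.DataFrame(rows)

# Usage: a wide gap between rows is exactly what an aggregate score hides.
# print(evaluate_by_subgroup(model, test_df, FEATURES, "had_event", "sex"))
```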
Take the work of Dr. Marzyeh Ghassemi at MIT, who specializes in building machine learning models that are both clinically useful and ethically grounded. Her team’s research focuses on how AI can both perpetuate and help detect inequities in healthcare data. In one study, they found that most models perform worse for women, Black patients, and people with complex chronic conditions. Not surprisingly, those are often the same groups that are underserved by the current system.
Then there’s Dr. Inioluwa Deborah Raji, whose groundbreaking audits of commercial AI systems exposed racial bias in facial recognition. While not health-specific, her work has shaped the broader movement toward algorithmic auditing, a practice healthcare desperately needs to adopt. After all, if we can audit hospitals, we can audit models.
And over at Stanford, Dr. Fatima Rodriguez has done landmark work showing how cardiovascular risk prediction tools routinely underestimate risk in women and Black patients, directly impacting who receives preventive care.
All of this points to a truth that Silicon Valley often skips:
Good data science isn’t just about prediction. It’s about who gets protected.
Efficiency vs. Equity
Here’s the uncomfortable part: Many developers still prioritize efficiency over equity and call it “objectivity.”
The thinking goes like this: if a model performs well on aggregate metrics, it's a success. If it saves doctors time, it’s progress. If it reduces costs, it’s ready for deployment.
But here’s the problem: health outcomes are not averages. You can’t safely ignore edge cases when the edge is half the population.
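A made-up arithmetic example (every number here is invented for illustration) shows how this plays out:

```python
# Hypothetical numbers: a model that works well for men and poorly for
# women can still post an aggregate accuracy that looks deployable.
acc_men, acc_women = 0.96, 0.78    # per-group accuracy (invented)
share_men, share_women = 0.5, 0.5  # the "edge case" is half the population

aggregate = acc_men * share_men + acc_women * share_women
print(f"Aggregate accuracy: {aggregate:.2f}")  # 0.87 -- passes review
print(f"Accuracy for women: {acc_women:.2f}")  # the number patients feel
```

The headline number clears most deployment bars; the subgroup number is the one patients actually live with.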
So we have to ask:
Are you optimizing for the fastest answer or the fairest one?
Are you designing tools that spot dominant patterns or tools that uplift the overlooked?
Those are two very different machines.
Designing for the Disbelieved
To truly transform healthcare, AI must be trained to see what doctors miss, not just what they already expect.
That means:
Including data from marginalized populations, even if it’s harder to collect
Annotating for context: pain described by a woman might look different in clinical notes than pain described by a man
Balancing data to correct for historical exclusions (e.g., adding data from women with atypical heart attack symptoms)
Auditing outputs with real patient advocates, not just developers in a lab
Building explainable AI so patients can challenge flawed logic
Using counterfactual reasoning: what would this model decide if this patient were a white man? (see the sketch below)
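Here is the sketch referenced in that last item: a simple counterfactual flip test. It assumes a trained classifier whose input features include a recorded demographic field; the function, column names, and values are hypothetical, and a full counterfactual analysis would also adjust features correlated with the protected attribute.

```python
# Minimal counterfactual check: re-score the same patients with only the
# protected attribute changed and count how often the decision flips.
# All names are hypothetical; `attr` must be among `feature_cols` for the
# flip to reach the model at all.
import numpy as np

def counterfactual_flip_rate(model, df, feature_cols, attr="sex",
                             from_val="female", to_val="male"):
    original = df[df[attr] == from_val]
    counterfactual = original.copy()
    counterfactual[attr] = to_val  # change only the protected attribute

    pred_orig = model.predict(original[feature_cols])
    pred_cf = model.predict(counterfactual[feature_cols])
    return float(np.mean(pred_orig != pred_cf))

# A high flip rate suggests the attribute itself, not the clinical
# picture, is driving the model's decisions.
# print(f"{counterfactual_flip_rate(model, test_df, FEATURES):.1%}")
```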
This isn’t science fiction. It’s happening, but slowly, and only where ethics is built into the process from the beginning rather than bolted on at the end.
Why This Matters
Because here’s what’s at stake:
AI is already being used in diagnostics, risk scoring, clinical decision support, and triage.
If we don’t correct for bias now, we will lock generations of health injustice into silicon and code, this time without the bedside manner to fight back.
But if we do this right?
We could finally see patients who have long been rendered invisible.
We could flag symptoms doctors have learned to overlook.
We could reimagine care itself: not just faster, but fairer.
And that’s the kind of machine worth building.
Next in the Series:
“AI You Can Argue With: Why Transparency and Trust Still Matter”
We’ll examine explainability, auditability, and patients' right to question—or even reject—AI-generated decisions in clinical care.
Back to the Beginning: Series Overview: “Trust, Bias, and the Algorithm: Rethinking AI in Women’s Healthcare”