Part II: Garbage In, Garbage Out: How AI Learns from the Broken Systems We're Trying to Fix
Why biased healthcare data leads to dangerous AI and what it means for women, pain, and misdiagnosis.
There’s an old computer science saying, one that’s short, snarky, and devastatingly accurate:
Garbage in, garbage out.
In artificial intelligence, it means that no matter how sophisticated your model is, if the data you feed it is flawed, the output will also be flawed. But in healthcare, the implications are more than theoretical. When the data is biased, the consequences are deadly.
We often talk about AI as if it’s objective, as if it represents a clean break from the messy, human tendencies that have shaped medicine's long history of inequity. But in reality, most AI systems in healthcare are trained on electronic health records (EHRs), clinical trial data, insurance claims, and physician notes. And those are anything but neutral.
These data sources reflect decades—centuries—of unequal treatment. They’re built on systems that have consistently misdiagnosed women, ignored pain in Black patients, erased trans people, and filtered symptoms through a lens of cultural assumptions and institutional neglect. When we feed this data into algorithms, we aren’t correcting for those injustices; we’re encoding them.
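To see how that encoding happens mechanically, here is a deliberately simplified sketch in Python. Everything in it is synthetic and invented for illustration: two groups with identical true need, historical treatment records that under-treat one of them, and a basic logistic regression trained on those records. None of the numbers come from real data; the point is only the mechanism. The model dutifully learns the disparity and hands it back as a prediction.

```python
# Illustrative only: synthetic data showing how a model trained on records of
# biased care reproduces that bias. Groups, rates, and features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two demographic groups with an identical true rate of severe pain.
group = rng.integers(0, 2, size=n)
severe_pain = rng.random(n) < 0.30

# Historical label: "received pain medication." Suppose clinicians historically
# treated group 1's severe pain only 70% as often as group 0's (0.63 vs 0.90).
treat_prob = np.where(severe_pain, np.where(group == 0, 0.90, 0.63), 0.05)
treated = rng.random(n) < treat_prob

# Train on the biased outcome, using group membership and a noisy pain score
# as features -- the kind of signal an EHR-derived dataset would provide.
pain_score = severe_pain.astype(float) + rng.normal(0, 0.5, size=n)
X = np.column_stack([group, pain_score])
model = LogisticRegression().fit(X, treated)

# The model learns to recommend treatment less often for group 1,
# even though the true need is identical across groups.
for g in (0, 1):
    mask = (group == g) & severe_pain
    avg_prob = model.predict_proba(X[mask])[:, 1].mean()
    print(f"group {g}: mean predicted treatment probability (severe pain) = {avg_prob:.2f}")
```

The model isn’t malfunctioning here; it is faithfully summarizing the records it was given. That is the whole problem.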
Bias by Design
Let’s look at just a few examples of how deeply these biases run:
Pain Management Disparities: A 2016 study found that half of the white medical students and residents surveyed endorsed at least one myth about Black patients, such as the idea that Black people have thicker skin or feel less pain. Black patients are also 22% less likely than white patients to receive pain medication in emergency rooms.
Autism Diagnosis Gaps: For every girl diagnosed with autism, four boys receive a diagnosis. Yet research increasingly shows that many girls are missed entirely due to gendered expectations around social behavior. In other words, we built our diagnostic criteria around boys and then called girls “atypical.”
Heart Attack Symptoms: Women are significantly more likely than men to be misdiagnosed during a heart attack, often because their symptoms present differently, with more nausea, fatigue, and jaw pain. Yet the clinical model is still the "male heart attack." A 2018 study found that women under 55 were seven times more likely than men to be misdiagnosed in the ER.
Now, imagine an AI model trained on all this.
What will it learn?
That Black patients don’t need opioids.
That women’s symptoms are likely anxiety.
That autism is a “boy” disorder.
That heart attacks look a certain way—a male way.
This isn’t the algorithm’s fault. It’s doing what it was trained to do.
But that’s the point.
The Historical Baggage of "Medical Truth"
The problem isn’t just the data. It’s the worldview behind the data.
For most of history, medicine didn’t just exclude women; it actively built itself around the male body as the default.
As Cat Bohannon explains in Eve, women were not “forgotten” in medical science. They were deliberately excluded. Unless the chapter is about reproduction, the average anatomy textbook still features male bodies as its default illustrations. The NIH didn’t require women to be included in clinical trials until 1993. That’s barely one generation ago.
So when we ask AI to find patterns in medical data, we’re asking it to learn from an archive that has consistently:
Misinterpreted female biology
Pathologized female emotion
Treated women’s symptoms as less urgent, less real, and less worthy of intervention
This means the algorithm doesn’t just miss important signals; it learns to miss them.
Data Doesn’t Fix Itself
One of the most dangerous assumptions we can make about AI is that it will “even things out.” That large enough datasets will smooth over bias. That scale leads to fairness.
But in truth, bias scales, too.
A biased physician can only mistreat so many people in a day. A biased algorithm can misclassify tens of thousands of cases in minutes, quietly, invisibly, and with a false sense of precision. And unlike a bad doctor, you can’t argue with an algorithm’s bedside manner.
We don’t always see it happening. AI doesn’t announce when it’s wrong. It doesn’t flag when it was trained on biased notes. It just is: quiet, fast, and confident.
And if the bias isn’t apparent to the people who build and deploy these systems, because it wasn’t evident to the doctors who wrote the notes or the researchers who published the trials, then the systems will inherit all that garbage and spit it back out as science.
The Human Cost of Misclassification
What does this look like in real life?
It looks like a woman with undiagnosed endometriosis being told her pain is “stress.”
It looks like a young girl masking her autistic traits so well that her suffering never makes it into the record.
It looks like a Black woman in an ER being offered a sedative when she needs a cardiologist.
It looks like every patient who didn’t “fit the pattern” because the pattern was built on someone else’s body.
It’s not hypothetical. It’s happening. And if we’re not ruthlessly intentional about the data we use and the assumptions we make, AI won’t solve it. It will calcify it.
Toward Better Systems
This doesn’t mean we give up on AI in healthcare; far from it. It means we stop treating algorithms like magic and start treating them like mirrors.
It means:
Curating training datasets to be inclusive, intersectional, and historically aware
Auditing outputs for demographic bias (a minimal sketch follows this list)
Incorporating social determinants of health into model design
Elevating patient voice, especially from historically marginalized communities
Creating transparent systems that patients can question
And most importantly, it means acknowledging that technology cannot solve what systems refuse to face. AI isn’t a shortcut around structural injustice. But it can be part of a toolkit to expose and repair it, if we have the will.
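For readers who want to see what auditing outputs for demographic bias can look like in practice, here is a minimal sketch in Python. The column names (sex, y_true, y_pred) and the toy numbers are hypothetical; a real audit would run on held-out clinical data, cover more groups and metrics, and report uncertainty. But the core move, comparing error rates across groups instead of admiring a single overall accuracy figure, really is this simple.

```python
# A minimal sketch of a post-hoc bias audit: compare error rates by group.
# Column names ("sex", "y_true", "y_pred") and the toy data are hypothetical.
import pandas as pd

def audit_by_group(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Per-group false negative rate (missed true cases) and positive prediction rate."""
    rows = []
    for group, sub in df.groupby(group_col):
        true_cases = sub[sub["y_true"] == 1]
        rows.append({
            group_col: group,
            "n": len(sub),
            "false_negative_rate": float((true_cases["y_pred"] == 0).mean()),
            "predicted_positive_rate": float((sub["y_pred"] == 1).mean()),
        })
    return pd.DataFrame(rows)

# Toy example: the model misses 2 of 3 true cases among women and none among men.
# An overall accuracy number would hide that gap; the per-group table exposes it.
df = pd.DataFrame({
    "sex":    ["F", "F", "F", "F", "M", "M", "M", "M"],
    "y_true": [1,   1,   1,   0,   1,   1,   1,   0],
    "y_pred": [1,   0,   0,   0,   1,   1,   1,   0],
})
print(audit_by_group(df, "sex"))
```

A check like this doesn’t fix the underlying data, but it makes the inherited bias visible before the system reaches a patient, which is the precondition for fixing anything.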
Because in healthcare, data is power.
And we can’t afford to keep feeding our future with our past mistakes.
Next Up: Part III: Can We Build a Better Machine? Designing AI That Sees What Doctors Don’t or Won’t
Back to the Beginning: Series Overview: “Trust, Bias, and the Algorithm: Rethinking AI in Women’s Healthcare”