"Because in healthcare, data is power. And we can’t afford to keep feeding our future with our past LARGELY UNDOCUMENTED mistakes." The examples in this piece are worth paying attention to: when a patient is given a sedative instead of a cardiology referral, for example, the EMR rarely flags this as an error, so AI will not see it as one. It is not just that we in healthcare make mistakes; the important point is that those mistakes can go largely undocumented, a fatal flaw on the road to an AI utopia.
Absolutely!! The most dangerous mistakes in medicine aren’t just frequent; they’re invisible. When a woman gets a sedative instead of a cardiology referral, the EMR logs "anxiety," not "misdiagnosis." So the AI sees that pathway as successful care.
This is the quiet tragedy of training AI on unflagged failures: We’re not just encoding bias; we’re encoding false confidence. And unless we start auditing what's missing from the data, we’re building systems that will scale silence.
This conversation is long overdue. How many more women and minorities will be subjected to subpar healthcare once AI becomes ingrained and doctors stop questioning its outputs? Overreliance and complacency are dangerous wherever AI is involved, but even more so in healthcare.
Exactly this. The same systems that erased women's pain are now being encoded into AI, quietly, automatically. Until the data is fixed, the future will keep repeating the past.
"Because in healthcare, data is power.
And we can’t afford to keep feeding our future with our past LARGELY UNDOCUMENTED mistakes." The examples in this piece are worth paying attention to since when the patient is given a sedative instead of seeing a cardiologist for example...the EMR rarely flags this as an error and thus AI will not see it as such. So it is not just that we in healthcare make mistakes...an important point is that they can be largely undocumented...a fatal flaw in the road to AI uptopia
Absolutely!! The most dangerous mistakes in medicine aren’t just frequent; they’re invisible. When a woman gets a sedative instead of a cardiology referral, the EMR logs "anxiety," not "misdiagnosis." So the AI sees that pathway as successful care.
This is the quiet tragedy of training AI on unflagged failures: We’re not just encoding bias; we’re encoding false confidence. And unless we start auditing what's missing from the data, we’re building systems that will scale silence.
This conversation is long overdue. How many more women and minorities will be subjected to subpar healthcare when AI becomes ingrained and doctors don’t question the AI outputs. Over reliance and complacency is dangerous when involved AI in most fields but evermore so in healthcare.
Exactly this. The same systems that erased women's pain are now being encoded into AI, quietly, automatically. Until the data is fixed, the future will keep repeating the past.