Part II: Garbage In, Garbage Out: How AI Learns from the Broken Systems We're Trying to Fix
Why biased healthcare data leads to dangerous AI and what it means for women, pain, and misdiagnosis.
There’s an old computer science saying, one that’s short, snarky, and devastatingly accurate:
Garbage in, garbage out.
In artificial intelligence, it means that no matter how sophisticated your model is, if the data you feed it is flawed, the output will also be flawed. But in healthcare, the implications are more than theoretical. When the data is biased, the consequences are deadly.
We often talk about AI as if it’s objective, as if it represents a clean break from the messy, human tendencies that have shaped medicine's long history of inequity. But in reality, most AI systems in healthcare are trained on electronic health records (EHRs), clinical trial data, insurance claims, and physician notes. And those are anything but neutral.
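To make the mechanism concrete, here is a minimal, synthetic sketch of garbage in, garbage out. Every group name, threshold, and number below is invented for illustration; it assumes nothing about any real dataset, model, or health system. The toy "model" is just a learned decision boundary, standing in for what a real classifier would absorb from biased labels.

```python
# A toy illustration of "garbage in, garbage out": two groups report pain
# with identical severity distributions, but the historical "treated" label
# was granted less often to group B. A model fit to those labels learns the
# bias, not the medicine. All values here are synthetic and invented.
import random

random.seed(0)

def make_record():
    group = random.choice(["A", "B"])
    severity = random.uniform(0, 10)  # true pain severity, same for both groups
    # Biased historical label: group B's pain had to be worse to be believed.
    threshold = 5.0 if group == "A" else 7.5
    treated = severity > threshold
    return group, severity, treated

data = [make_record() for _ in range(10_000)]

# "Train" the simplest possible per-group model: find the decision boundary
# between treated and untreated cases, exactly as the labels present it.
def learned_threshold(records, group):
    treated = [s for g, s, t in records if g == group and t]
    untreated = [s for g, s, t in records if g == group and not t]
    return (max(untreated) + min(treated)) / 2  # midpoint between the classes

for g in ["A", "B"]:
    print(f"Group {g}: model recommends treatment above severity "
          f"{learned_threshold(data, g):.1f}")
# The model demands roughly 50% more severe pain from group B before
# recommending treatment. The bias went in; the bias came out.
```

Nothing in that sketch is a bug. The model did exactly what it was asked to do: faithfully reproduce the patterns in its training data. That is the whole problem.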
These data sources reflect decades—centuries—of unequal treatment. They’re built on systems that have consistently misdiagnosed women, ignored pain in Black patients, erased trans …