Artificial intelligence and medical algorithms are closely linked to our modern healthcare system. These technologies mimic the thought processes of doctors to make medical decisions and are designed to help providers determine who needs care. But a big problem with artificial intelligence is that it very often replicates the biases and blind spots of the humans who create it.
Researchers and doctors have warned that the algorithms used to determine who gets kidney transplants, heart surgeries, and breast cancer diagnoses exhibit racial bias. These flaws can lead to harmful care and, in some cases, put the health of millions of patients at risk.
So how exactly does bias infiltrate these algorithms? And what can be done to prevent it?
In this episode, we hear from Casey Ross, STAT's national health technology correspondent, on his reporting on racial bias in AI. Chris Hemphill, the Vice President of Applied AI and Growth at Actium Health, tells us about the rise of responsible AI in healthcare. Ziad Obermeyer, an emergency physician and researcher at the UC Berkeley School of Public Health, tells us how his team found bias in an algorithm widely used in our healthcare system, and about a case where AI was used to correct an injustice in health care.
A transcript of this episode is available here.
You can subscribe to “Color Code” on Apple Podcasts, Spotify, Google Podcasts, SoundCloud, and elsewhere. New episodes will be released every other Monday.
For more on some of the topics covered in the episode:
And check out some of STAT’s coverage on the subject:
This podcast was made possible with support from the Commonwealth Fund.