
Brianna White

Administrator
Medical tools enabled by artificial intelligence (AI) can greatly improve health care by helping with problem-solving, automating tasks, and identifying patterns. AI can aid in diagnosis, predict patients’ risk of developing certain illnesses, and help identify the communities most in need of preventive care. But AI products can also replicate, and worse, exacerbate, existing biases and disparities in patients’ care and health outcomes. Alleviating these biases requires first understanding where they come from.
Christina Silcox, Ph.D., research director for digital health at the Duke-Margolis Center for Health Policy, and her team identified four sources of AI bias and ways to mitigate them in a recent report funded by The Pew Charitable Trusts. She spoke with Pew about the findings.
How does bias in AI affect health care?
AI tools do not have to be biased. But if we're not paying careful attention, we teach these tools to reflect existing biases in humans and health care, which leads to continued or worsened health inequities. It’s particularly concerning if this happens in groups that have historically gotten poor care, such as those who have been affected by structural racism and institutionalized disparities like lack of access to affordable health insurance and care. For example, when machine-learning-based tools are trained on real-world health data that reflects inequitable care, we run the risk of perpetuating and even scaling up those inequities. Those tools may end up leading to slower diagnoses or incomplete treatments, decreased access, or reduced overall quality of care—and that’s the last thing we want to do.
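
The mechanism Silcox describes, a model learning from records that already encode unequal care, is easy to demonstrate. Below is a minimal, hypothetical sketch (not from the Pew report; the data, the 40% miss rate, and all variable names are made up for illustration): two patient groups have identical true illness rates, but the historical labels miss 40% of one group's cases, so a simple classifier trained on those labels under-flags truly ill patients in that group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B (hypothetical groups)
severity = rng.normal(0.0, 1.0, n)   # latent clinical need, same distribution for both
truly_ill = severity > 0.5           # identical true illness rate in both groups

# Label bias: historical records miss 40% of group B's true cases,
# mimicking inequitable access to diagnosis and care.
missed = (group == 1) & (rng.random(n) < 0.4)
recorded_ill = truly_ill & ~missed

# Train on the biased *recorded* labels, as a real-world tool effectively would.
X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, recorded_ill)
pred = model.predict(X)

# Evaluate against the *true* condition: whom does the tool fail to flag?
for g, name in ((0, "A"), (1, "B")):
    ill = truly_ill & (group == g)
    fnr = 1.0 - pred[ill].mean()     # share of truly ill patients the tool misses
    print(f"group {name} false-negative rate: {fnr:.2f}")
```

On this synthetic data, group B's false-negative rate comes out far higher than group A's even though the underlying clinical need is identical: the model has learned the gap in the records, not the disease.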
Continue reading: https://www.pewtrusts.org/en/research-and-analysis/articles/2022/08/24/how-to-understand-and-fix-bias-in-artificial-intelligence-enabled-health-tools