Brianna White

Administrator
Staff member
An artificial intelligence (AI) algorithm designed to scan electronic medical records for potential clinical trial participants can achieve high accuracy in some cases. However, depending on the patient pool, where those patients are located, and what the trial is for, the selection process carries inherent biases. Just because an algorithm performs a given task correctly doesn’t mean it does so in a responsible, ethical way.
One well-known example is Amazon’s AI recruiting algorithm, which prioritized male candidates over female ones. The algorithm learned from the company’s existing workforce -- the data wasn’t inaccurate -- so the model was exactly as flawed as the hiring history used to train it. AI has great potential for good, but it is only as effective as the humans and data powering it. These biases may not mean much in verticals such as retail or in the ads you’re served, but they can be a life-or-death matter in healthcare.
Fortunately, as AI technology and tools mature, so, too, do best practices and regulatory frameworks around ethics. Just as it did with GDPR for data protection, the EU has proposed a legal framework (the AI Act) to make AI tools safer and more trustworthy for users. But we can’t wait for government-mandated laws and best practices for AI to pass. For now, it’s on us -- the people who build these products and services -- to ensure that what we ship with AI does more good than harm.
Here are three priorities leaders should focus on to ensure their AI initiatives provide business value and do so ethically.
Continue reading: https://tdwi.org/articles/2022/08/22/adv-all-3-priorities-for-your-next-ai-initiative.aspx
 
