Brianna White

Administrator
Staff member
In 2019 a study published in the journal Science found that an artificial intelligence system from Optum, which many health systems were using to spot high-risk patients who should receive follow-up care, was prompting medical professionals to pay more attention to white people than to Black people. Only 18% of the people identified by the AI were Black, while 82% were white. After reviewing data on the patients who were actually the sickest, the researchers calculated that the numbers should have been about 46% and 53%, respectively. The impact was far-reaching: the researchers estimated that the AI had been applied to at least 100 million patients.
While the data scientists and executives involved in creating the Optum algorithm never set out to discriminate against Black people, they fell into a shockingly common trap: training AI on data that reflects historical discrimination, which produces biased outputs. In this particular case, the algorithm used past health care spending as a proxy for medical need; because Black patients had historically received fewer health care resources, the algorithm mistakenly inferred that they needed less help.
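The mechanism is easy to reproduce in miniature. The toy Python sketch below is not Optum's actual model; the group labels, effect sizes, and distributions are all invented for illustration. It scores patients by predicted spending when one group's recorded spending understates its true need, and that group ends up underrepresented among the "high-risk" patients even though both groups are equally sick:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two equally sized groups with identical true medical need.
group_b = rng.random(n) < 0.5
need = rng.normal(50, 10, n)

# Historical discrimination: group B's recorded spending understates
# its need by 30% (an invented effect size for illustration).
spending = need * np.where(group_b, 0.7, 1.0) + rng.normal(0, 2, n)

# A risk score built to predict spending, rather than need itself,
# ranks group B too low.
high_need = need >= np.quantile(need, 0.9)
high_score = spending >= np.quantile(spending, 0.9)

print(f"Group B among the truly sickest 10%: {group_b[high_need].mean():.0%}")
print(f"Group B among the top 10% by score:  {group_b[high_score].mean():.0%}")
```

Note that the score never sees a group label; the skew comes entirely from the proxy target, which is why this failure mode is so easy to miss.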
There are a lot of well-documented and highly publicized ethical risks associated with AI; unintended bias and invasions of privacy are just two of the most notable kinds. In many instances the risks are specific to particular uses, like the possibility that self-driving cars will run over pedestrians or that AI-generated social media newsfeeds will sow distrust of public institutions. In some cases they’re major reputational, regulatory, financial, and legal threats. Because AI is built to operate at scale, when a problem occurs, it affects all the people the technology engages with—for instance, everyone who responds to a job listing or applies for a mortgage at a bank. If companies don’t carefully address ethical issues in planning and executing AI projects, they can waste a lot of time and money developing software that is ultimately too risky to use or sell, as many have already learned.
Your organization’s AI strategy needs to take into account several questions: How might the AI we design, procure, and deploy pose ethical risks that cannot be avoided? How do we systematically and comprehensively identify and mitigate them? If we ignore them, how much time and labor would it take us to respond to a regulatory investigation? How large a fine might we pay if we’re found negligent in, let alone guilty of, violating regulations or laws? How much would we need to spend to rebuild consumer and public trust, provided that money could solve the problem?
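Answering the "systematically and comprehensively identify" question usually begins with routine measurement rather than one-off reviews. As a minimal sketch of what that can look like (the data here is made up, and the 0.8 threshold is a heuristic borrowed from US employment guidance, not a universal legal standard), a per-group audit of a model's decisions can surface the kind of skew the Optum study found:

```python
from collections import Counter

def selection_rates(decisions, groups):
    """Per-group rate at which the model selects (flags, approves) people."""
    totals, selected = Counter(), Counter()
    for d, g in zip(decisions, groups):
        totals[g] += 1
        selected[g] += d
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group's selection rate over the highest; < 0.8 is a common red flag."""
    return min(rates.values()) / max(rates.values())

# Hypothetical decisions (1 = selected) for members of two groups.
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
print(rates)                                    # {'A': 0.6, 'B': 0.4}
print(round(disparate_impact_ratio(rates), 2))  # 0.67 -> flag for review
```

A failing ratio doesn't prove discrimination on its own, but it tells an ethics committee exactly where to look before a regulator does.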
Continue reading: https://hbr.org/2022/07/why-you-need-an-ai-ethics-committee
 
