Artificial intelligence researchers and industry leaders explored what it means to center individuals, communities and society in areas like healthcare and hospitality during a Stanford Institute for Human-Centered Artificial Intelligence (HAI) conference on Tuesday.
Unlike autonomous or semi-autonomous AI, the human-in-the-loop model is an approach that incorporates human feedback and decision-making at several stages of a system’s operation. HAI aims to center people beyond just keeping them “in the loop.”
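To make that distinction concrete, the minimal Python sketch below (an illustration, not code from HAI or the conference; the class, the stand-in model, and the confidence threshold are all hypothetical) shows one common human-in-the-loop pattern: the model acts alone on high-confidence predictions, defers uncertain ones to a human reviewer, and logs the human’s decisions as feedback for later retraining.

```python
from dataclasses import dataclass, field

@dataclass
class HumanInTheLoopClassifier:
    """Illustrative human-in-the-loop wrapper (hypothetical, not HAI code).

    Predictions below `threshold` confidence are deferred to a human;
    every human decision is stored as labeled feedback for retraining.
    """
    threshold: float = 0.8
    feedback: list = field(default_factory=list)

    def model_predict(self, item: str) -> tuple[str, float]:
        # Stand-in for a real model: a trivial keyword rule with a
        # made-up confidence score.
        if "fever" in item:
            return "high-risk", 0.9
        return "low-risk", 0.55

    def classify(self, item: str, ask_human) -> str:
        label, confidence = self.model_predict(item)
        if confidence >= self.threshold:
            return label  # autonomous path: the model decides alone
        # Human-in-the-loop path: defer to a person and log the decision
        human_label = ask_human(item, label)
        self.feedback.append((item, human_label))  # future training data
        return human_label

if __name__ == "__main__":
    # A console prompt stands in for the human reviewer.
    clf = HumanInTheLoopClassifier()
    decide = lambda item, hint: input(f"{item!r} (model says {hint}): ") or hint
    print(clf.classify("patient reports fever and cough", decide))  # model decides
    print(clf.classify("patient reports mild headache", decide))    # human decides
```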
James Landay, a computer science professor and HAI’s faculty director of research, emphasized in his introductory remarks that noticing and critiquing “the potential and real harms of AI” is vital to improving the technology, but that “recognizing the negative impacts of AI is not enough.”
Some AI technologists have leveraged their technical expertise to tackle problems in high-impact areas. However, as Landay explained, gaps and failures in AI carry widespread consequences, from negative social impacts to problems left unsolved.
As COVID-19 overwhelmed healthcare systems worldwide in 2020, AI experts created hundreds of predictive models to diagnose the disease and predict patient risk, Landay said.
A 2021 scientific review revealed that these models often missed the mark: a research team at Maastricht University in the Netherlands, led by epidemiologist Laure Wynants, assessed 232 algorithms and found none viable for clinical use, with only two showing potential for future development.
Negative social impacts are often tied to algorithmic bias, such as when datasets reproduce systemic biases against women and gender-marginalized people.
Continue reading: https://stanforddaily.com/2022/11/16/humans-as-the-keystone-an-emerging-approach-to-artificial-intelligence/