Brianna White

A group of civil rights, tech and other advocacy organizations called on the National Institute of Standards and Technology to recommend steps needed to ensure nondiscriminatory and equitable outcomes for all users of artificial intelligence systems in the final draft of its Proposal for Identifying and Managing Bias in Artificial Intelligence.
The definition of model risk, traditionally understood as the risk of financial loss when inaccurate AI models are used, should be expanded to include the risk of discriminatory and inequitable outcomes, the group wrote in its Friday response to NIST's draft proposal.
NIST released the proposal for public comment on June 22 with the goal of helping AI designers and deployers mitigate social biases throughout the development lifecycle. The letter from 34 organizations, including the NAACP and the Southern Poverty Law Center along with many groups in the housing and consumer credit space, makes 12 recommendations for improving NIST's proposal and process.
Continue reading: https://www.fedscoop.com/civil-rights-organizations-nist-ai/