Brianna White

Administrator
Staff member
Jul 30, 2019
The question of how to use artificial intelligence (AI) tools responsibly and without bias remains largely unanswered. As you develop your AI strategy, consider these ethical best practices.
The question of how to use AI responsibly has been a hot topic for some time, yet little has been done to implement regulations or ethical standards. To start seeing real industry change, we need to shift from simply discussing the risks of unbridled AI to implementing concrete practices and tools.
Here are three steps practitioners can take to make responsible AI a priority today.
1. Check for model robustness
AI models can be sensitive. Something as minor as capitalization can affect a model’s ability to process data accurately. Accurate results are foundational to responsible AI, especially in industries like healthcare. For example, a model should recognize that reducing the dose of a medication is a positive change, regardless of how the surrounding text is worded.
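The capitalization sensitivity described above can be checked directly: perturb the casing of an input and confirm the model’s label doesn’t change. A minimal sketch, using a hypothetical toy keyword classifier in place of a real NLP model:

```python
def predict_sentiment(text: str) -> str:
    """Toy stand-in for a real sentiment model (hypothetical, for illustration)."""
    positive = {"like", "love", "improved", "great"}
    words = set(text.lower().split())  # lowercasing here makes it case-robust
    return "positive" if words & positive else "neutral"

def invariant_to_case(model, text: str) -> bool:
    """A model robust to capitalization should return the same label
    for the original, upper-cased, and title-cased variants."""
    base = model(text)
    return all(model(t) == base for t in (text.upper(), text.title()))

print(invariant_to_case(predict_sentiment, "I like ice cream"))  # True
```

The same perturb-and-compare pattern extends to other surface changes such as punctuation or typos; a real model would replace `predict_sentiment` in this check.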
Tools like CheckList, an open source resource, probe failure modes of natural language processing (NLP) models that standard accuracy metrics typically miss. By generating a variety of targeted tests, CheckList can surface robustness gaps so practitioners can fix them. Sometimes the fix is as easy as introducing a more pronounced sentiment into training data – “I like ice cream VERY much” instead of “I like ice cream.” Though the statements differ, the model can be trained to recognize both as positive.
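The intensifier example above corresponds to what CheckList calls a directional expectation test: adding “VERY much” to a positive statement should not lower the model’s positive score. A minimal sketch of that idea, assuming a hypothetical toy scoring function rather than the actual CheckList API:

```python
def positive_score(text: str) -> float:
    """Hypothetical toy scorer: positive keywords count fully,
    intensifiers add a smaller boost."""
    words = text.lower().split()
    score = sum(1.0 for w in words if w in {"like", "love", "great"})
    score += 0.5 * sum(1.0 for w in words if w in {"very", "really"})
    return score

# Directional expectation: the intensified version should score at
# least as high as the plain version.
pairs = [
    ("I like ice cream", "I like ice cream VERY much"),
    ("I love this phone", "I really love this phone"),
]
for plain, strong in pairs:
    assert positive_score(strong) >= positive_score(plain), (plain, strong)
print("all directional tests passed")
```

In CheckList itself, such tests are generated at scale from templates; this sketch only illustrates the pass/fail logic a single test applies.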
Continue reading: https://enterprisersproject.com/article/2022/11/artificial-intelligence-prioritize-responsible-practices
 

Attachments

  • p0009451.m08994.ai_practices.jpg (17.9 KB)