AI has the potential to deliver enormous business value for organizations, and its adoption has been accelerated by the data-related challenges of the pandemic. Forrester estimates that almost 100% of organizations will be using AI by 2025, and that the artificial intelligence software market will reach $37 billion by the same year.
But there is growing concern around AI bias — situations where AI makes decisions that are systematically unfair to particular groups of people. Researchers have found that AI bias has the potential to cause real harm.
I recently had the chance to speak with Ted Kwartler, VP of Trusted AI at DataRobot, to get his thoughts on how AI bias occurs and what companies can do to make sure their models are fair.
Why AI Bias Happens
AI bias occurs because human beings choose the data that algorithms use, and also decide how the results of those algorithms will be applied. Without extensive testing and diverse teams, it is easy for unconscious biases to enter machine learning models. AI systems then automate and perpetuate those biases at scale.
For example, a US Department of Commerce study found that facial recognition AI often misidentifies people of color. If law enforcement uses facial recognition tools, this bias could lead to wrongful arrests of people of color.
Several mortgage algorithms in financial services companies have also consistently charged Latino and Black borrowers higher interest rates, according to a study by UC Berkeley.
Kwartler says the business impact of biased AI can be substantial, particularly in regulated industries, where missteps can result in fines or damage a company's reputation. Companies that need to attract customers must find ways to put AI models into production in a thoughtful way, and must test their programs to identify potential bias.
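One simple way to start the kind of bias testing described above is to compare a model's approval rates across demographic groups. The sketch below is purely illustrative, not DataRobot's method: the group names and prediction data are hypothetical, and the 0.8 threshold is the common "four-fifths" rule of thumb used in disparate-impact analysis.

```python
# Minimal sketch of a disparate-impact check on model decisions.
# All data here is hypothetical: each record pairs a demographic
# group label with the model's binary decision (1 = approved).

def selection_rates(records):
    """Return the approval rate for each group."""
    totals, approved = {}, {}
    for group, decision in records:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + decision
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group approval rate.
    The four-fifths rule of thumb flags ratios below 0.8 for review."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs for two groups of applicants.
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

rates = selection_rates(predictions)
ratio = disparate_impact_ratio(rates)
print(rates)                                   # group_a: 0.75, group_b: 0.25
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33, well below 0.8
```

A check like this only surfaces a disparity in outcomes; deciding whether that disparity is unfair, and what to do about it, still requires the human review and diverse teams the article emphasizes.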
Continue reading: https://www.forbes.com/sites/bernardmarr/2022/09/30/the-problem-with-biased-ais-and-how-to-make-ai-better/?sh=6513f40e4770