AI has fueled efficiencies across industries for years. It's old news by now, but as I've said before, that's a good thing.
Conversations about AI sound much different today than they did 10 years ago. Instead of wondering whether AI will help businesses grow or increase bottom lines, the proliferation of the technology has pushed AI conversations in more meaningful and complex directions. One area I'm particularly interested in is data privacy and biases in AI models.
You might remember the Plaid class-action lawsuit or the racial bias in Twitter's image-cropping tool. One is an instance of an AI algorithm collecting unnecessary customer data; the other is a case of biased AI decision-making. Algorithms themselves can produce biased decisions, but those biases, which are often unconscious, stem from the humans who develop and train the algorithms.
Still, there's no excuse for unethical AI practices. You won't change my mind on that.
What exactly is unethical AI?
When I received my iPhone X in 2017, I couldn't wait to try the facial recognition feature. But no matter how many times I tried, Apple's authentication technology wouldn't recognize my face. As an Asian American, I feared the technology's failure to recognize me was related to my race. It turns out I wasn't the only person with the issue: Apple was accused of failing to train its AI model with broad enough sample data to recognize and distinguish people of color.
While unintentional, this issue created a subpar experience that frustrated many iPhone users, myself included. Let's take this scenario a step further. What if we start to power medical diagnoses with AI? Or self-driving cars, which is already happening? The consequences of these oversights become far graver and potentially life-threatening.
Continue reading: https://www.forbes.com/sites/forbestechcouncil/2022/06/09/your-ai-practices-might-not-be-ethical/?sh=1a30750071d6