In September last year, Google’s cloud unit looked into using artificial intelligence to help a financial firm decide whom to lend money to.
It turned down the client’s idea after weeks of internal discussions, deeming the project too ethically dicey because the AI technology could perpetuate biases like those around race and gender.
Since early last year, Google has also blocked new AI features analyzing emotions, fearing cultural insensitivity, while Microsoft restricted software mimicking voices and IBM rejected a client request for an advanced facial-recognition system.
All these technologies were curbed by panels of executives or other leaders, according to interviews with AI ethics chiefs at the three U.S. technology giants.
Reported here for the first time, their vetoes and the deliberations that led to them reflect a nascent industry-wide drive to balance the pursuit of lucrative AI systems with a greater consideration of social responsibility.
“There are opportunities and harms, and our job is to maximize opportunities and minimize harms,” said Tracy Pizzo Frey, who sits on two ethics committees at Google Cloud as its managing director for Responsible AI.
Judgments can be difficult.
Microsoft, for instance, had to balance the benefit of using its voice mimicry tech to restore impaired people’s speech against risks such as enabling political deepfakes, said Natasha Crampton, the company’s chief responsible AI officer.
Continue reading: https://www.claimsjournal.com/news/national/2021/09/09/305872.htm