According to the 2022 IBM Institute for Business Value study on AI Ethics in Action, building trustworthy Artificial Intelligence (AI) is perceived as a strategic differentiator and organizations are beginning to implement AI ethics mechanisms.
Seventy-five percent of respondents believe that ethics is a source of competitive differentiation. More than 67% of respondents who view AI and AI ethics as important indicate that their organizations outperform their peers in sustainability, social responsibility, and diversity and inclusion.
The survey showed that 79% of CEOs are prepared to embed AI ethics into their AI practices, up from 20% in 2018, but less than a quarter of responding organizations have operationalized AI ethics. Less than 20% of respondents strongly agreed that their organization's practices and actions match (or exceed) their stated principles and values.
Peter Bernard, CEO of Datagration, says that understanding AI gives companies an advantage, and that explainable AI allows businesses to optimize their data.
"Not only are they able to explain and understand the AI/ML behind predictions, but when errors arise, they can understand where to go back and make improvements," said Bernard. "A deeper understanding of AI/ML allows businesses to know whether their AI/ML is making valuable predictions or whether they should be improved."
Bernard believes this can ensure incorrect data is spotted early on and stopped before it feeds into decisions.
Avivah Litan, vice president and distinguished analyst at Gartner, says that explainable AI also furthers scientific discovery as scientists and other business users can explore what the AI model does in various circumstances.
"They can work with the models directly instead of relying only on what predictions are generated given a certain set of inputs," said Litan.
But John Thomas, vice president and distinguished engineer at IBM Expert Labs, says that at its most basic level, explainable AI is the set of methods and processes that help us understand a model's output. "In other words, it's the effort to build AI that can explain to designers and users why it made the decision it did based on the data that was put into it," said Thomas.
Continue reading: https://www.forbes.com/sites/jenniferhicks/2022/07/28/explainable-ai-is--trending-and-heres-why/?sh=1409fcba2008
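Thomas's description points to a family of concrete techniques for probing why a model produced a given output. As one illustration (not drawn from the article), the sketch below uses permutation feature importance from scikit-learn to show how a practitioner might check which inputs a trained model actually relies on; the dataset and model here are synthetic placeholders, and other explainability methods (SHAP, LIME, counterfactuals) follow the same spirit.

```python
# Minimal sketch of one explainability technique: permutation feature importance.
# It estimates how much a trained model's accuracy depends on each input feature
# by shuffling that feature and measuring the drop in performance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; in practice this would be the business's own dataset.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn on held-out data: a large accuracy drop means
# the model leans heavily on that feature for its predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for i, (mean, std) in enumerate(zip(result.importances_mean, result.importances_std)):
    print(f"feature_{i}: importance {mean:.3f} +/- {std:.3f}")
```

Output like this gives designers and users a first-pass answer to Thomas's question of why the model decided what it did, and points to where to look when errors arise, as Bernard describes.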