Today, businesses are increasingly relying on digital technologies such as Artificial Intelligence (AI) to remain competitive. AI, however, requires a high level of trust because of questions surrounding its fairness, explainability, and security. Various stakeholders must trust AI systems before businesses can scale their AI deployments; a lack of trust can be the biggest obstacle to widespread adoption of AI. We sat down with Sameep Mehta – IBM Distinguished Engineer and Lead – Data and AI Platforms, IBM Research India, to understand why trust in AI matters in a digital world and how IBM is helping companies achieve greater trust, transparency and confidence in business predictions and outcomes by applying the industry’s most comprehensive data and AI solutions.
Why does trust in AI matter in a world of digital risk?
AI is expected to be a multitrillion-dollar market opportunity in the next decade. Almost all organizations want to leverage AI to improve existing processes and to open new channels of revenue. However, a lack of trust, transparency and governance of AI systems could be one major impediment to realizing its true potential.
According to IBM’s Global AI Adoption Index 2022, while organizations have embraced AI, few have made tangible investments in ways to ensure trust or address bias. Four out of five businesses believe it is important to be able to describe how their AI made a decision. In fact, a majority of organizations haven’t taken key steps to ensure AI is both trustworthy and responsible, including reducing bias (74%), tracking performance variations/model drift (68%), and explaining AI-powered decisions (61%). Therefore, we need to embrace the right set of tools, learn the skills, and raise overall awareness to embed trust into the complete data and AI lifecycle.
What barriers do CIOs face when driving AI adoption in the enterprise and ensuring trustworthy AI systems?
To build trusted AI systems, we must address three key challenges. First, we need to educate and train technical and business leaders so they understand trustworthy AI. Leadership must recognize that trust in AI systems is a must-have capability, and that paying lip service will harm the organization’s overall AI initiatives. Second, the CIO team should provide developers with best-in-class trusted AI tooling. Libraries for checking models for bias, providing explanations, generating audit trails, and so on could be included in the overall DevOps toolchain managed by the CIO. Finally, trust in AI should be simple and easily understandable across the organization. Read more at: https://www.cxotoday.com/interviews/why-trust-matters-in-ai-for-business/
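To make the tooling point concrete, below is a minimal sketch of the kind of automated bias check that could sit in such a CIO-managed DevOps toolchain, written with IBM's open-source AIF360 library. The dataset, column names ("gender", "income", "approved"), the choice of protected attribute, and the 0.8 threshold are illustrative assumptions for this sketch, not details from the interview.

# Minimal sketch, assuming illustrative data and an assumed 0.8 disparate-impact threshold.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical scored loan-approval data: 'gender' is the protected attribute,
# 'approved' is the model's decision (1 = favorable outcome).
df = pd.DataFrame({
    "gender":   [0, 0, 0, 1, 1, 1, 1, 0, 1, 0],
    "income":   [30, 45, 52, 61, 38, 70, 55, 41, 48, 33],
    "approved": [0, 1, 0, 1, 1, 1, 1, 0, 1, 0],
})

# Wrap the dataframe in an AIF360 dataset so fairness metrics can be computed.
dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"gender": 0}],
    privileged_groups=[{"gender": 1}],
)

# Disparate impact: ratio of favorable-outcome rates for the unprivileged vs.
# privileged group. A commonly cited rule of thumb flags values below 0.8.
di = metric.disparate_impact()
print(f"Disparate impact: {di:.2f}")
if di < 0.8:
    print("Potential bias detected -- flag this model for review before release.")

A check like this could run as one stage of a CI/CD pipeline alongside explanation and audit-trail steps, failing the build when the metric falls outside an agreed policy range.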