Brianna White

Administrator
Staff member
Jul 30, 2019
Today, businesses increasingly rely on digital technologies such as Artificial Intelligence (AI) to remain competitive. AI, however, requires a high level of trust because of open questions about its fairness, explainability, and security. Various stakeholders must trust AI systems before businesses can scale their AI deployments, and that lack of trust can be the biggest obstacle to the widespread adoption of AI. We sat down with Sameep Mehta – IBM Distinguished Engineer and Lead – Data and AI Platforms, IBM Research India, to understand why trust in AI matters in a digital world and how IBM is helping companies achieve greater trust, transparency, and confidence in business predictions and outcomes by applying the industry's most comprehensive data and AI solutions.
Why does trust in AI matter in a digital-risk world?
AI is expected to be a multitrillion-dollar market opportunity in the next decade. Almost all organizations want to leverage AI to improve existing processes and open new channels of revenue. However, a lack of trust, transparency, and governance of AI systems could be a major impediment to realizing its true potential.
According to IBM's Global AI Adoption Index 2022, while organizations have embraced AI, few have made tangible investments in ways to ensure trust or address bias. Four out of five businesses believe it is important to be able to explain how their AI made a decision. In fact, a majority of organizations haven't taken key steps to ensure their AI is both trustworthy and responsible, such as reducing bias (74%), tracking performance variations and model drift (68%), and explaining AI-powered decisions (61%). Therefore, we need to embrace the right set of tools, learn the skills, and raise overall awareness in order to embed trust into the complete data and AI lifecycle.
What barriers do CIOs face when introducing AI in the enterprise and ensuring trustworthy AI systems?
To build trusted AI systems, we must address three key challenges. First, we need to educate and train technical and business leaders so they understand trustworthy AI. Leadership must recognize that trust in AI systems is a must-have capability; paying lip service will harm the organization's overall AI initiatives. Second, the CIO team should provide developers with best-in-class trusted-AI tooling. Libraries for checking models for bias, generating explanations, producing audit trails, and so on could be included in the overall DevOps toolchain managed by the CIO. Finally, trust in AI should be simple and easily understandable across the organization. Read more at: https://www.cxotoday.com/interviews/why-trust-matters-in-ai-for-business/
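To make the bias-checking idea concrete, here is a minimal illustrative sketch of one metric such a library might compute: the disparate impact ratio, i.e. the rate of favorable outcomes for an unprivileged group divided by the rate for a privileged group. The function name, the 0.8 rule of thumb, and the toy data are assumptions for illustration, not IBM's actual tooling.

```python
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Return P(favorable | unprivileged) / P(favorable | privileged).

    outcomes: list of 0/1 model decisions (1 = favorable outcome).
    groups:   list of group labels, aligned with outcomes.
    A common rule of thumb flags ratios below 0.8 as potentially biased.
    """
    def favorable_rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)

    return favorable_rate(unprivileged) / favorable_rate(privileged)


# Toy example: approvals for two demographic groups A and B.
outcomes = [1, 0, 1, 0, 1, 1, 1, 1]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A is approved 50% of the time, group B 100% of the time,
# so the ratio is 0.5 -- well below the 0.8 threshold.
print(disparate_impact(outcomes, groups, unprivileged="A", privileged="B"))
```

Wiring a check like this into a CI/CD pipeline (failing the build when the ratio drops below a threshold) is one way the bias checks described above could live in the DevOps toolchain.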
 
