Brianna White

Administrator
Staff member
Powered by digital transformation, organizations seem to face no ceiling on the heights they will reach in the next few years. One of the notable technologies helping enterprises scale these new heights is artificial intelligence (AI). But as AI advances across numerous use cases, a persistent problem remains: humans still do not fully trust it. At best, AI is under intense scrutiny, and we are still a long way from the human-AI synergy that data science and AI experts dream of.
One of the underlying factors behind this disjointed reality is the complexity of AI. The other is the opaque approach AI-led projects often take to problem-solving and decision-making. To solve this challenge, several enterprise leaders looking to build trust and confidence in AI have turned their sights to explainable AI (also called XAI) models.
Explainable AI enables IT leaders — especially data scientists and ML engineers — to query, understand and characterize model accuracy and ensure transparency in AI-powered decision-making.   
Why companies are getting on the explainable AI train
With the global explainable AI market size estimated to grow from $3.5 billion in 2020 to $21 billion by 2030, according to a report by ResearchandMarkets, it’s obvious that more companies are now getting on the explainable AI train. Alon Lev, CEO at Israel-based Qwak, a fully-managed platform that unifies machine learning (ML) engineering and data operations, told VentureBeat in an interview that this trend “may be directly related to the new regulations that require specific industries to provide more transparency about the model predictions.” The growth of explainable AI is predicated on the need to build trust in AI models, he said.
He further noted that another growing trend in explainable AI is the use of SHAP (SHapley Additive exPlanations) values, a game-theoretic approach to explaining the output of ML models by attributing each prediction to the input features that drove it.
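To make that concrete, here is a minimal sketch of how SHAP values are typically computed with the open-source shap library. The synthetic dataset, the RandomForestClassifier, and the parameter choices are illustrative assumptions for the sketch, not details taken from the article.

```python
# Minimal sketch: computing SHAP values for a tree-based classifier.
# Assumes the open-source `shap` and `scikit-learn` packages; the synthetic
# dataset and RandomForestClassifier below are illustrative stand-ins.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train a simple classifier on synthetic data.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley attributions efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Each SHAP value attributes part of one prediction to one input feature,
# so a stakeholder can see which features pushed a given decision.
print(np.asarray(shap_values).shape)

# Optional global view of feature importance:
# shap.summary_plot(shap_values, X)
```

In practice, these per-feature attributions (and plots such as summary_plot) are what give data scientists, regulators, and business stakeholders a window into why a model produced a particular prediction.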
Continue reading: https://venturebeat.com/ai/why-the-explainable-ai-market-is-growing-rapidly/