Powered by digital transformation, organizations seem to have no ceiling on the heights they can reach in the next few years. One of the notable technologies helping enterprises scale these new heights is artificial intelligence (AI). But as AI advances across numerous use cases, one problem persists: humans still don't fully trust it. At best, AI operates under intense scrutiny, and we're still a long way from the human-AI synergy that data science and AI experts dream of.
One of the underlying factors behind this disjointed reality is the complexity of AI. The other is the opaque approach AI-led projects often take to problem-solving and decision-making. To address this challenge, several enterprise leaders looking to build trust and confidence in AI have set their sights on explainable AI (also called XAI) models.
Explainable AI enables IT leaders — especially data scientists and ML engineers — to query, understand and characterize model accuracy and ensure transparency in AI-powered decision-making.
Why companies are getting on the explainable AI train
With the global explainable AI market estimated to grow from $3.5 billion in 2020 to $21 billion by 2030, according to a report by ResearchandMarkets, it's clear that more companies are getting on the explainable AI train. Alon Lev, CEO of Israel-based Qwak, a fully managed platform that unifies machine learning (ML) engineering and data operations, told VentureBeat in an interview that this trend "may be directly related to the new regulations that require specific industries to provide more transparency about the model predictions." The growth of explainable AI, he said, is predicated on the need to build trust in AI models.
He further noted that another growing trend in explainable AI is the use of SHAP (SHapley Additive exPlanations) values, a game-theoretic approach to explaining the output of ML models.
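For a sense of what that looks like in practice, here is a minimal sketch, assuming a scikit-learn model and the open-source `shap` Python package (neither is specified in the article), of how SHAP values are typically computed and visualized:

```python
# A minimal sketch of computing SHAP values for a tree-based model with the
# open-source `shap` library. The model and dataset are illustrative
# assumptions, not the specific setup discussed in the article.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple model on a built-in public dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features using
# Shapley values, the game-theoretic concept mentioned above.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Summarize which features push predictions higher or lower across the dataset.
shap.summary_plot(shap_values, X)
```

Each SHAP value quantifies how much a single feature pushed an individual prediction above or below the model's average output, which is what makes the technique attractive for the kind of transparency requirements Lev describes.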
Continue reading: https://venturebeat.com/ai/why-the-explainable-ai-market-is-growing-rapidly/