Brianna White

Administrator
Staff member
Jul 30, 2019
Artificial intelligence has evolved rapidly over the last few years and is being applied across industries as a powerful and innovative tool for countless use cases. However, great power comes with great responsibility. Thanks to AI and machine learning (ML), fraud prevention is now more accurate and evolving faster than ever. Real-time scoring technology allows business leaders to detect fraud instantly; however, the use of AI- and ML-driven decision-making has also raised transparency concerns. The need for explainability becomes especially pressing when ML models operate in high-risk environments.
Explainability and interpretability are becoming more important as the number of crucial decisions made by machines increases. "Interpretability is the degree to which a human can understand the cause of a decision," said tech researcher Tim Miller. Improving the interpretability of ML models is therefore crucial, and it leads to automated solutions that users can trust.
Developers, consumers, and leaders should understand how fraud-prevention decisions are made. Any ML model with more than a handful of parameters is too complex for most people to follow directly. However, the explainable AI research community has repeatedly argued that, thanks to the development of interpretation tools, black-box models are no longer truly black boxes. With the help of such tools, users can understand, and therefore trust, the ML models that make important decisions.
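One common family of interpretation tools the paragraph alludes to is permutation feature importance: shuffle one feature's values across transactions and measure how much the model's accuracy drops. Below is a minimal, self-contained sketch of the idea. The fraud-scoring model, its feature names, weights, threshold, and the synthetic transactions are all hypothetical, invented purely for illustration; real systems would apply the same technique to a trained model via a library such as scikit-learn or SHAP.

```python
import random

# Hypothetical linear fraud-scoring model; the feature names and weights
# are illustrative only, not taken from any real system.
WEIGHTS = {"amount": 0.8, "hour_of_day": 0.1, "num_prior_chargebacks": 1.5}
THRESHOLD = 2.0

def predict(txn):
    """Flag a transaction as fraud when its weighted score crosses the threshold."""
    score = sum(WEIGHTS[f] * txn[f] for f in WEIGHTS)
    return score >= THRESHOLD

def accuracy(data, labels):
    """Fraction of transactions whose prediction matches the true label."""
    return sum(predict(t) == y for t, y in zip(data, labels)) / len(data)

def permutation_importance(data, labels, feature, trials=20, seed=0):
    """Mean accuracy drop when one feature's values are shuffled across rows.

    A large drop means the model leans heavily on that feature; a drop
    near zero means the feature barely influences the decisions.
    """
    rng = random.Random(seed)
    base = accuracy(data, labels)
    total_drop = 0.0
    for _ in range(trials):
        values = [t[feature] for t in data]
        rng.shuffle(values)
        shuffled = [{**t, feature: v} for t, v in zip(data, values)]
        total_drop += base - accuracy(shuffled, labels)
    return total_drop / trials

# Tiny synthetic data set: fraud here is driven entirely by prior chargebacks.
data = [
    {"amount": a, "hour_of_day": h, "num_prior_chargebacks": c}
    for a, h, c in [(0.2, 0.1, 0), (0.5, 0.9, 0), (0.7, 0.3, 0),
                    (0.1, 0.6, 0), (0.9, 0.2, 0),
                    (0.3, 0.4, 2), (0.6, 0.8, 2), (0.8, 0.5, 2),
                    (0.4, 0.7, 2), (0.2, 0.9, 2)]
]
labels = [t["num_prior_chargebacks"] == 2 for t in data]

for feature in WEIGHTS:
    print(feature, round(permutation_importance(data, labels, feature), 3))
```

Because the synthetic labels depend only on `num_prior_chargebacks`, shuffling that feature hurts accuracy while shuffling the others does not, which is exactly the kind of human-readable explanation ("the model relies on prior chargebacks, not transaction time") that interpretation tools surface for fraud models.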
Continue reading: https://www.darkreading.com/analytics/explainable-ai-for-fraud-prevention
 
