Brianna White

Administrator
Staff member
Jul 30, 2019
“With the amount of data today, we know there is no way we as human beings can process it all…The only technique we know that can harvest insight from the data, is artificial intelligence,” IBM CEO Arvind Krishna recently told the Wall Street Journal.
The insights to which Krishna is referring are patterns in the data that can help companies make predictions, whether that’s the likelihood of someone defaulting on a mortgage, the probability of developing diabetes within the next two years, or whether a job candidate is a good fit. More specifically, AI identifies mathematical patterns found in thousands of variables and the relations among those variables. These patterns can be so complex that they can defy human understanding.
This can create a problem: While we understand the variables we put into the AI (mortgage applications, medical histories, resumes) and understand the outputs (approved for the loan, has diabetes, worthy of an interview), we might not understand what’s going on between the inputs and the outputs. The AI can be a “black box,” which often renders us unable to answer crucial questions about the operations of the “machine”: Is it making reliable predictions? Is it making those predictions on solid or justified grounds? Will we know how to fix it if it breaks? Or more generally: can we trust a tool whose operations we don’t understand, particularly when the stakes are high?
To the minds of many, the need to answer these questions leads to the demand for explainable AI: in short, AI whose predictions we can explain.
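The article itself doesn't include code, but as a rough illustration of what "explaining" a black-box model can look like in practice, here is a minimal sketch using scikit-learn's permutation importance. The dataset, the choice of model, and the feature labels are all assumptions made for the example, not anything prescribed by the article.

```python
# Minimal sketch (hypothetical data and model): fit a "black box" classifier,
# then use permutation importance to see which inputs its predictions depend on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in applicant data: each column plays the role of a variable such as
# income, credit history length, or debt-to-income ratio.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": accurate, but its internal decision logic is hard to read directly.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one input at a time and measure how much the
# model's test accuracy drops. Large drops flag inputs the predictions rely on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")
```

Techniques like this don't open the box so much as probe it from the outside, which is one reason the usefulness of an explanation depends heavily on who it is for, as the next section discusses.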
What Makes an Explanation Good?
A good explanation should be intelligible to its intended audience, and it should be useful, in the sense that it helps that audience achieve their goals. When it comes to explainable AI, there are a variety of stakeholders that might need to understand how an AI made a decision: regulators, end-users, data scientists, executives charged with protecting the organization’s brand, and impacted consumers, to name a few. All of these groups have different skill sets, knowledge, and goals — an average citizen wouldn’t likely understand a report intended for data scientists.
Continue reading: https://hbr.org/2022/08/when-and-why-you-should-explain-how-your-ai-works
 
