
Brianna White

Administrator
Staff member
Jul 30, 2019
Almost every company understands the value that artificial intelligence (AI) or machine learning (ML) can bring to its business, but for many, the potential benefits of adding AI do not yet outweigh the risks. Report after report consistently ranks AI as critically important to C-suite executives. Remaining competitive means streamlining processes, increasing efficiency, and improving outcomes, all of which can be achieved through AI and ML decisioning.
Despite the value that AI and ML bring, a lack of trust, or fear that the technology will expose businesses to more risk, has slowed the adoption of AI/ML decisioning. This concern isn't wholly unfounded: the risk of biased decisions in highly regulated industries and applications, like insurance eligibility, mortgage lending, or talent acquisition, has been the subject of several new laws focused on the "right to explainability." Earlier this year, Congress proposed the Algorithmic Accountability Act, and the European Union is pushing for stricter AI regulations as well. These laws, and the "right to explainability" movement in general, are a reaction to mistrust of AI/ML decisions.
In fact, ethical worries around AI and ML are impeding the use of AI/ML decisioning. Research from Forrester, commissioned by InRule, found that AI/ML leaders fear that bias could negatively impact their bottom line.
To solve this problem, businesses must rethink their goals for AI/ML decisioning. For too long, many outside the AI/ML field have seen the technology as a replacement for human intelligence rather than an amplification of it. By removing humans from the decision-making loop, we increase the chance of biased, inaccurate, and potentially costly decisions.
Continue reading: https://www.forbes.com/sites/forbestechcouncil/2022/09/02/ai-is-for-human-empowerment-so-why-are-we-cutting-humans-out/?sh=16d1599b4400