Brianna White

Administrator
Staff member
For as long as there has been technological progress, there have been concerns over its implications. The Manhattan Project, in which scientists grappled with their role in unleashing such innovative yet destructive nuclear power, is a prime example. Lord Solomon “Solly” Zuckerman was a scientific advisor to the Allies during World War II and, afterward, a prominent nuclear nonproliferation advocate. In the 1960s he offered a prescient insight that still rings true today: “Science creates the future without knowing what the future will be.”
Artificial intelligence (AI), now a catch-all term for any machine learning (ML) software designed to perform complex tasks that typically require human intelligence, is destined to play an outsized role in our future society. Its recent proliferation has led to an explosion of interest, as well as increased scrutiny of how AI is being developed and who is doing the developing, casting a light on how bias shapes design and function. The EU is planning new legislation aimed at mitigating the potential harms AI may bring about, under which responsible AI will be required by law.
It’s easy to understand why such guardrails are needed. Humans build AI systems, so they inevitably bring their own views of ethics into the design, oftentimes for the worse. Some troubling examples have already emerged: the Apple Card’s credit algorithm and Amazon’s job-recruiting tool were each investigated for gender bias, and Google had to retool its photo service after its algorithm applied racist tags. Each company has since fixed the issues, but the technology is moving fast, underscoring the lesson that building superior technology without accounting for risk is like sprinting blindfolded.
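As a rough illustration of the kind of check such investigations perform, below is a minimal sketch of a demographic-parity audit: it compares a model’s approval rates across a protected attribute such as gender. The function and the data are hypothetical, invented for this example; they are not taken from the article or from any of the companies mentioned.

# Minimal sketch of a fairness audit, assuming a binary classifier whose
# approvals may differ across a protected attribute (e.g., gender).
# All names and numbers here are hypothetical, for illustration only.

def demographic_parity_gap(decisions, groups):
    """Return the largest difference in approval rate between groups.

    decisions: list of 0/1 outcomes (1 = approved)
    groups:    list of group labels, aligned with decisions
    """
    counts = {}
    for decision, group in zip(decisions, groups):
        approved, total = counts.get(group, (0, 0))
        counts[group] = (approved + decision, total + 1)
    rates = {g: approved / total for g, (approved, total) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: similar applicants, diverging outcomes.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]

gap = demographic_parity_gap(decisions, groups)
print(f"Approval-rate gap between groups: {gap:.0%}")  # 60% on this toy data

A gap near zero does not prove a system is fair, and a large gap does not prove intent, but a 60-point spread like the one above is exactly the kind of signal that prompted the investigations described here.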
Building responsible AI
Melvin Greer, chief data scientist at Intel, pointed out in VentureBeat that “…experts in the area of responsible AI really want to focus on successfully managing the risks of AI bias, so that we create not only a system that is doing something that is claimed, but doing something in the context of a broader perspective that recognizes societal norms and morals.”
Continue reading: https://venturebeat.com/2022/07/01/building-responsible-ai-5-pillars-for-an-ethical-future/
 

Attachments

  • p0008458.m08074.5_pillars.jpg