
Kathleen Martin

Guest
These days, artificial intelligence (AI) is becoming our version of the deus ex machina, promising to swoop in and solve our most pressing business problems. But, like the Greek gods, AI can be fickle and fallible.
AI has the potential to significantly improve the way we make decisions. It can also make recommendations that are unfair, harmful, and fundamentally wrong. There are many ways bias can creep into our models, from poor data quality to spurious correlations.
Fortunately, though, by applying technological, ethical, and legal governance around the development and use of AI, we can significantly reduce the impact of bias in our models. 
Forms of bias 
There are two main kinds of bias in AI. 
The first is algorithmic bias, which comes from poor or unrepresentative training data. If we're training our models to make decisions for a set of people, for example, but our training data does not represent that population, then our results are going to be skewed. The second is societal bias, which comes from our own personal biases, assumptions, norms, and blind spots.
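To make the first kind concrete, here is a minimal sketch (the column name, population shares, and threshold are hypothetical, not anything from the article) of how one might flag groups that are underrepresented in training data relative to the population the model will actually serve:

```python
# Rough sketch: flag groups that are underrepresented in training data
# relative to the population the model will serve.
# The "group" column and the 0.5 threshold are illustrative assumptions.
import pandas as pd

def underrepresented_groups(train: pd.DataFrame,
                            population_shares: dict,
                            column: str = "group",
                            min_ratio: float = 0.5) -> list:
    """Return groups whose share of the training data is less than
    min_ratio times their share of the target population."""
    train_shares = train[column].value_counts(normalize=True)
    flagged = []
    for group, pop_share in population_shares.items():
        train_share = train_shares.get(group, 0.0)
        if train_share < min_ratio * pop_share:
            flagged.append(group)
    return flagged

# Example: group B makes up 40% of the population but only 10% of the data.
train = pd.DataFrame({"group": ["A"] * 90 + ["B"] * 10})
print(underrepresented_groups(train, {"A": 0.6, "B": 0.4}))  # ['B']
```

A check like this only catches missing representation, not biased labels or spurious correlations, but it is the kind of routine test that can surface the problem before a model ships.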
Predictive policing tools are a useful example of both types of bias. Location-based policing algorithms draw on data about events, places, and crime rates to predict when and where crimes will happen. Demographic-based algorithms use data about people’s age, gender, history of substance abuse, marital status, and criminal record to predict who might commit a crime in the future. Dozens of cities in the U.S. use PredPol and COMPAS, the most common of these tools.  
However, predictive policing tools sometimes produce racist results. If these models are fed data that is biased against people of color, they will produce outcomes that echo that bias. If we don't bake in layers of governance to limit this kind of bias, we could see—and in fact, we are seeing—devastating real-world consequences. 
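One technical check that this kind of governance might include is comparing how often a model flags different groups. The sketch below is purely illustrative; the column names and numbers are invented and are not drawn from PredPol, COMPAS, or the article:

```python
# Rough sketch: compare how often a model flags each group as "high risk".
# Column names and example numbers are illustrative assumptions.
import pandas as pd

def flag_rate_ratio(df: pd.DataFrame, group_col: str, pred_col: str,
                    group_a: str, group_b: str) -> float:
    """Ratio of positive-prediction (flagging) rates: group_b / group_a."""
    rates = df.groupby(group_col)[pred_col].mean()
    return rates[group_b] / rates[group_a]

preds = pd.DataFrame({
    "group": ["A"] * 100 + ["B"] * 100,
    "flagged_high_risk": [1] * 30 + [0] * 70 + [1] * 60 + [0] * 40,
})
ratio = flag_rate_ratio(preds, "group", "flagged_high_risk", "A", "B")
print(f"Group B is flagged {ratio:.1f}x as often as group A")  # 2.0x
```

A ratio far from 1.0 doesn't prove the model is biased on its own, but it is exactly the kind of disparity that a governance process should force teams to investigate and explain.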
Continue reading: https://www.forbes.com/sites/servicenow/2021/11/05/governing-the-future-of-ai/?sh=2c2643ec2d2a
 
