Kathleen Martin

Guest
This article was written by Micaela Kaplan, Ethics in AI Lead, CallMiner
Artificial intelligence (AI) and machine learning (ML) have become ubiquitous in our everyday lives. From self-driving cars to our social media feeds, AI has helped our world operate faster than it ever has, and that’s a good thing — for the most part.
As these technologies have become integrated into our everyday lives, so too have questions grown around the ethics of creating and using them. AI tools are models and algorithms built on real-world data, so they reflect real-world injustices like racism, misogyny, and homophobia, among many others. This data leads to models that perpetuate existing stereotypes, reinforce the subordination of certain groups of people to the majority population, or unfairly allocate resources or access to services. All these outcomes cause major repercussions for consumers and businesses alike.
While many companies have begun recognizing these potential problems in their AI solutions, only a few have begun building the structures and policies to address them. The fact is that AI and social justice can no longer operate as two separate worlds. Each needs the influence of the other to create tools that will help us build the world we want to see. Addressing the ethical questions surrounding AI and understanding our social responsibilities is a complicated process that requires the challenging work and dedication of many people. Below are a few actionable things to keep in mind as you begin the journey toward responsible AI.
Create a space that allows people to voice their questions and concerns
When studying ethics in any capacity, facing uncomfortable truths comes with the territory. The strongest teams in the fight for responsible AI are those that are honest with themselves. These teams acknowledge the biases that appear in their data, their models, and themselves, and they consider how these biases affect the world around them. Noticing and acting on these biases and their impacts requires honest group discussion.
Dedicating the time and space to have these conversations is critical to ensuring that they can be just that — conversations. As teams, we need to create spaces that allow us to speak freely on topics that might be controversial, without fear of consequences. This fundamentally requires the support of executives. Sometimes it might be easier for a team to meet and discuss without executives present, and then present the group's ideas to them later. This level of anonymity can help provide a sense of security, because ideas presented on behalf of the team cannot be traced back to a single person. Open communication and honest feedback are what allow us to confront these questions productively. In the fight for ethical AI, it's not team members against each other; it's the team against the potential problems in the model.
Continue reading: https://venturebeat.com/2021/08/28/4-considerations-when-taking-responsibility-for-responsible-ai/