Kathleen Martin
Guest
Earlier this month, the Equal Employment Opportunity Commission (EEOC) held a webinar on artificial intelligence (AI) in the workplace. Commissioner Keith Sonderling explained that the EEOC is monitoring employers’ use of such technology in the workplace to ensure compliance with anti-discrimination laws. The agency recognizes the potential for AI to mitigate unlawful human bias, but is wary of rapid, undisciplined implementation that may perpetuate or accelerate such bias. Sonderling remarked that the EEOC may use Commissioner charges—agency-initiated investigations unconnected to an employee’s charge of discrimination—to ensure employers are not using AI in an unlawful manner, particularly under the rubric of disparate impact claims.
The EEOC’s interest in this topic is not new. The agency previously held a public meeting in October 2016 discussing the use of big data in the workplace and the implications for employment law practitioners. But the most recent webinar likely reflects the EEOC’s response to a November 2020 letter, authored by ten U.S. Senators, asking the agency to focus on employers’ use of artificial intelligence, machine-learning, and other hiring technologies that may result in discrimination. We previously blogged about this letter here.
Many attorneys and AI commentators agree that AI, such as automated candidate sourcing, resume screening, or video interview analysis, is not a panacea for employment discrimination. The technology, if not carefully implemented and monitored, can introduce and even exacerbate unlawful bias. This is because algorithms generally rely on a set of human inputs, such as resumes of high-performing existing employees, to guide their analysis of candidates. If those inputs lack diversity, the algorithm may reinforce existing institutional bias at breakneck speed. This can lead to claims of disparate impact discrimination. The EEOC would most assuredly take a heightened interest in any such claims.
Although the EEOC has flagged these issues, it has not yet issued written guidance on the use of AI in employment decisions. In his remarks, Sonderling confirmed that the most relevant guidance document is over 40 years old. He was referring to the EEOC’s 1978 Uniform Guidelines on Employee Selection Procedures. That guidance, written in the wake of the 1960s civil rights movement, outlines different ways employers can show that employment tests and other selection criteria are job-related and consistent with business necessity. Although dated, the same principles that justified the validity of selection procedures in the 1970s can guide employers using AI today. One such method, called the 80% rule, provides that a selection rate for any race, sex, or ethnic group that is less than eighty percent (80%) of the selection rate for the group with the highest selection rate constitutes a “substantially different rate of selection,” indicating possible disparate impact. Employers may use this rule of thumb from the Uniform Guidelines to test AI tools prior to implementation and to regularly audit such tools after implementation.
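To make the 80% rule concrete, the comparison it describes can be sketched in a few lines of code. The sketch below is illustrative only: the group labels, applicant counts, and function names are hypothetical assumptions, not part of the Uniform Guidelines themselves.

```python
# Minimal sketch of the Uniform Guidelines' 80% ("four-fifths") rule,
# as an employer might apply it to audit an AI screening tool's outcomes.
# All group names and counts below are hypothetical.

def selection_rates(outcomes):
    """Compute each group's selection rate: selected / total applicants."""
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes):
    """Flag any group whose selection rate falls below 80% of the
    highest group's selection rate (possible disparate impact)."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {group: (rate, rate / highest >= 0.8)
            for group, rate in rates.items()}

# Hypothetical audit data: (selected, total applicants) per group.
outcomes = {
    "Group A": (48, 100),  # 48% selection rate
    "Group B": (30, 100),  # 30% selection rate -> 30/48 = 62.5% of highest
}

for group, (rate, passes) in four_fifths_check(outcomes).items():
    status = "ok" if passes else "possible disparate impact"
    print(f"{group}: selection rate {rate:.0%} - {status}")
```

With these hypothetical numbers, Group B's rate is 62.5% of Group A's, below the four-fifths threshold, so an employer running such a pre-implementation test would want to investigate the tool further before relying on it.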
Continue reading: https://www.natlawreview.com/article/employers-beware-eeoc-monitoring-use-artificial-intelligence