EEOC reminds employers that Title VII applies to AI

No breaking news here, folks.  

The Equal Employment Opportunity Commission has issued a resource on artificial intelligence, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964. This new resource essentially reminds employers that the same rules that apply to any other selection procedure also apply to AI.

Title VII prohibits employers from using tests or selection procedures that have an adverse impact – or a disproportionately large negative impact – on the basis of any protected characteristic. The Uniform Guidelines on Employee Selection Procedures address how employers should determine whether any selection process has an adverse impact. The EEOC published the Guidelines in 1978, so these rules are not new to employers. What is new is the extensive and expanding use of AI by employers, especially in making hiring decisions. 

The EEOC describes AI as follows:

In the employment context, using AI has typically meant that the developer relies partly on the computer’s own analysis of data to determine which criteria to use when making decisions. AI may include machine learning, computer vision, natural language processing and understanding, intelligent decision support systems, and autonomous systems.

The guidance also provides examples, including the following:

  • Resume scanners that prioritize applications using certain keywords.
  • Employee monitoring software that rates employees on the basis of their keystrokes or other factors.
  • “Virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements.
  • Video interviewing software that evaluates candidates based on their facial expressions and speech patterns.
  • Testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test.

The new guidance reiterates the longstanding legal framework for determining whether a selection procedure violates Title VII:

(1) Employers should assess whether any selection procedure has an adverse impact on the basis of a characteristic protected by Title VII – that is, race, sex, color, national origin, or religion – by comparing the selection rates of the different groups. If the selection rate for one group is “substantially” less than the selection rate for another group, then the process may have adverse impact. Under the Guidelines’ “four-fifths rule,” a selection rate that is less than 80 percent of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. (A sketch of this comparison appears after this list.)

(2) If a selection procedure has adverse impact on the basis of a protected characteristic, the employer must show that the procedure is job-related and consistent with business necessity.

(3) Even if an employer shows that a selection procedure is job-related and consistent with business necessity, it may not use a procedure that has adverse impact if there is a less discriminatory alternative available.
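To make the selection-rate comparison in step (1) concrete, here is a minimal sketch in Python applying the Uniform Guidelines’ four-fifths rule of thumb. The group names and applicant counts are hypothetical, and the code is illustrative only, not anything prescribed by the EEOC or the Guidelines.

```python
# Minimal sketch of the adverse-impact comparison in step (1).
# The groups and counts below are hypothetical; the 0.8 threshold is the
# Uniform Guidelines' "four-fifths rule" of thumb for a "substantially"
# lower selection rate.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

# Hypothetical outcomes from an AI-driven resume screen.
groups = {
    "Group A": {"applicants": 200, "selected": 120},  # rate = 0.60
    "Group B": {"applicants": 150, "selected": 60},   # rate = 0.40
}

rates = {name: selection_rate(g["selected"], g["applicants"])
         for name, g in groups.items()}
highest_rate = max(rates.values())

for name, rate in rates.items():
    ratio = rate / highest_rate  # impact ratio vs. the highest-rate group
    flag = "possible adverse impact" if ratio < 0.8 else "within four-fifths"
    print(f"{name}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

In this hypothetical, Group B’s selection rate (0.40) is only two-thirds of Group A’s (0.60) – well below the four-fifths threshold – so use of the tool would call for the job-relatedness and less-discriminatory-alternative analysis in steps (2) and (3).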

The EEOC’s guidance addresses whether employers can be held liable for adverse impact caused by AI that was developed by a vendor. The answer is “yes.”  

[E]mployers that are deciding whether to rely on a software vendor to develop or administer an algorithmic decision-making tool may want to ask the vendor, at a minimum, whether steps have been taken to evaluate whether use of the tool causes a substantially lower selection rate for individuals with a characteristic protected by Title VII. If the vendor states that the tool should be expected to result in a substantially lower selection rate for individuals of a particular race, color, religion, sex, or national origin, then the employer should consider whether use of the tool is job related and consistent with business necessity and whether there are alternatives that may meet the employer’s needs and have less of a disparate impact. . . . Further, if the vendor is incorrect about its own assessment and the tool does result in either disparate impact discrimination or disparate treatment discrimination, the employer could still be liable.

Although the EEOC’s guidance does not break any new ground, it provides a timely refresher on the Uniform Guidelines on Employee Selection Procedures and reminds us that the same rules apply to any selection method. Thus, employers should continue to monitor all of the tools and steps in their selection procedures for potential adverse impact, including tools that use AI.

You may also be interested in these recent Constangy posts about AI in the workplace:

  • Data privacy implications of ChatGPT
  • ChatGPT is coming for the workplace
  • EEOC issues guidance on use of AI in hiring (under the Americans with Disabilities Act)
  • Artificial intelligence in HR: A blessing and a curse

