Artificial Intelligence In Hiring: A Growing Employment Discrimination And Legal Menace?
The EEOC's lawsuit against iTutorGroup Inc. brings to the fore the legal and ethical challenges of using AI in hiring
Posted on 09-01-2023, Read Time: 5 Min

This case serves as a stark reminder of the significant ethical and legal questions surrounding AI in hiring. In this introductory exploration, we delve into AI's evolving role in employment, examining its potential benefits and the pressing concerns it raises for employers and job seekers alike.
We reached out to legal and compliance experts, who shared their insights into the implications of using AI in hiring and the takeaways from the US Equal Employment Opportunity Commission's first-ever lawsuit over AI discrimination in hiring. In that lawsuit, the EEOC reached a settlement with iTutorGroup, which allegedly programmed its recruitment software to automatically reject older applicants.
Using AI in Employment-Related Decisions Can Lead to Title VII Discrimination Claims
Erika Collins, Partner, Faegre Drinker
The use of AI programs to make employment-related decisions creates a risk for employers to be subject to Title VII discrimination claims. AI programs are complex machines that can potentially generate biased results against protected groups of people.
Employers are becoming vastly dependent on AI programs to make decisions in hiring, promotions, terminations, and monitoring employee performance. The risk of bias, in conjunction with employers’ increased dependence on AI programs, is leading to greater regulations and potential liabilities.
The EEOC has provided guidance that AI programs will trigger Title VII discrimination violations when protected classes are disfavored in employment selection processes.
iTutorGroup agreed to pay $365,000 to a class of more than 200 applicants aged 55 and older who were allegedly passed over because of their age. The settlement resolved claims brought by the EEOC in a May 2022 lawsuit against a group of companies that provide online English-language tutoring services to students in China.
The EEOC alleged that iTutorGroup used AI software that screened out female job applicants over the age of 55 and male job applicants over the age of 60 who applied to work as an online tutor with iTutorGroup. The EEOC alleged that, in using this software, iTutorGroup automatically rejected over 200 job applicants in 2020 solely due to age. The iTutorGroup lawsuit is part of the broader EEOC push (as reflected by the recent EEOC guidance on the use of AI) to target and eliminate hiring practices that, among other things, rely on AI tools or machine learning that adversely impact or intentionally exclude protected groups.
This settlement is the first of its kind where the EEOC has settled with a company accused of using AI tools that discriminate against applicants in hiring, but it likely will not be the last. Companies around the world continue to increase their use of AI in hiring and to support HR-related activities, and I expect that an increasing number of lawsuits targeting AI hiring bias are on the horizon — whether brought by agencies such as the EEOC or by individual employees through private legal counsel.
-------------------------------------------------------------------------
AI Tools Often Fail to Account for Indirect Aspects of Disability or Age
Kemper Patton, Labor and Employment Attorney, Brooks Pierce
Employers have to be careful when using artificial intelligence tools during the hiring process given prevailing civil rights laws protecting applicants from unlawful discrimination. The ADA and ADEA, in particular, are of serious concern because, oftentimes, there is no way of knowing whether AI tools are able to account for — and therefore, disregard — more indirect aspects of a particular disability or older age.
Take, for example, an applicant who struggles to make eye contact because they are blind (blindness generally being considered a qualifying disability under the ADA). If the AI tool accounts for an applicant’s ability to maintain eye contact during an interview, will it be able to disregard an applicant’s failure to do so due to their visual impairment? Employers have to be knowledgeable of how their AI tools make these decisions and analyze the data they obtain because employers cannot simply rely on any guarantees by AI vendors that their AI tools comply with anti-discrimination laws.
The EEOC has been aware that AI and hiring are a hotbed of employment discrimination concerns for some time now (the EEOC first released guidance on AI and ADA implications back in May 2022). However, the EEOC’s recent settlement with iTutorGroup evidences just how serious the EEOC is about enforcing anti-discrimination laws in the AI context.
The settlement sum alone ($365,000) illustrates how significant these actions can be. This case should be seen not as a one-off but as the opening of a new front in the EEOC's fight against what it perceives to be unlawful employment discrimination. As the use of AI becomes more prevalent in the workplace, employers have to be cognizant of the implications of prevailing anti-discrimination laws.
-------------------------------------------------------------------------
There’s a Growing Focus on AI Discrimination in the Workplace
Marissa Mastroianni, Employment Attorney, Cole Schotz
The recent settlement is a reflection of the EEOC’s growing focus on discrimination concerns arising from employers’ use of artificial intelligence in the workplace. In May 2023, the EEOC issued a technical assistance document, which discusses the application of long-standing anti-discrimination principles to an employer’s use of artificial intelligence when making employment decisions.
In light of the EEOC's actions, many employers are asking what they should do next. I suggest considering three main risk mitigation measures:
- Carefully review the technical assistance document to ensure compliance;
- Verify that any automated tools do not directly or indirectly ask questions that would elicit an applicant's age, race, religion, or other protected characteristics;
- Consider conducting audits on automated tools with legal counsel, both to further ensure compliance with applicable laws and to increase the possibility that the audit findings will be protected by attorney-client privilege.
Overall, the settlement is a strong reminder that employers must ensure that any artificial intelligence used in making employment decisions does not result in intentional or unintentional discrimination. In its press release regarding the settlement, EEOC Chair Charlotte A. Burrows made it very clear that the EEOC will not allow an employer to rely upon the automated nature of the technology as an excuse. Specifically, employers can still be held liable for discrimination even if the employer did not intend to exclude applicants or employees based on their protected characteristics.
-------------------------------------------------------------------------
Technology-Assisted Screening Process Must Comply with Civil Rights Laws
Rachel See, Senior Counsel, Labor & Employment, Seyfarth Shaw LLP
As more employers use AI in hiring, or are considering using it, they should be mindful that the civil rights laws already on the books apply to the results of their hiring processes, whether they are the product of human judgment or made with the assistance of technology.
Employers seeking to use AI in hiring should have a clear understanding of both the benefits they are hoping to achieve through the use of technology and the risks associated with that technology and the way they’re planning on using it.
Automatically rejecting older job applicants when their birthdates are already known does not require any sort of artificial intelligence or machine learning.
The iTutorGroup settlement and the EEOC's ongoing emphasis on AI and algorithmic bias serve as a strong reminder to employers that the results of any technology-assisted screening process must comply with existing civil rights laws. This reminder applies to both complicated and simple technology: it holds whether an employer is using cutting-edge artificial intelligence products or its recruiters are simply setting filters on a spreadsheet. A robust compliance and risk management program should periodically evaluate how technology, both sophisticated and simple, is being used in the hiring process to ensure compliance and manage other risks.