AI In HR: Evolving Legalities And Concerns
From bias detection to data security: Legal imperatives in AI integration
Posted on 03-28-2024, Read Time: 6 Min
Highlights:
- Employers are increasingly relying upon AI to make employment decisions at every stage of the job life cycle.
- AI tools could inadvertently narrow an employer’s candidate pool over time as they skew toward recruiting from one university or professional association over another.
- Using AI tools in employee hiring and evaluations (and during other parts of the employment lifecycle) also raises concerns about data privacy and security.

Employers are increasingly relying upon AI to make employment decisions at every stage of the job life cycle. These technological advances yield corresponding legal risks. Two such use cases for AI and their legal implications are discussed in this article.
Use of AI in Hiring and Employee Evaluations
Generative AI tools like ChatGPT are regularly used to scan job applications, screen candidates and create employee evaluations – processes that, as any human resources professional will attest, can be time-consuming. Federal and state civil rights laws lag behind the growing use of AI in employee recruitment and hiring. Neither the Equal Employment Opportunity Commission (EEOC) nor federal law regulates the use of AI for these purposes, for now. Few, if any, courts have definitively addressed the issue. President Joe Biden’s administration has only recently signaled, through a series of Executive Orders, its intent to regulate in this area.
The use of AI tools in hiring might disproportionately impact one racial or gender group over another, as machine learning prompts an employer’s candidate selection software to favor a specific skill or facial feature over another.
Similarly, AI tools could inadvertently narrow an employer’s candidate pool over time as they skew toward recruiting from one university or professional association over another. The use of any selection procedure or AI tool that has an adverse impact on the hiring, promotion or employment of members of a particular race, sex or ethnic group is discriminatory and could expose employers to EEOC complaints and other legal action.
Notably, some vendors advertise that their AI tools are EEOC- or OFCCP-certified (OFCCP being the Office of Federal Contract Compliance Programs) without actually holding such a certification. Employers must be mindful of this and should closely review generative AI tools to ensure they do not have a disparate impact on individuals in protected classifications, including, but not limited to, race, gender, ethnicity, national origin and age.
AI, Data Privacy and the Workplace
Using AI tools in employee hiring and evaluations (and during other parts of the employment lifecycle) also raises concerns about data privacy and security. AI tools are developing rapidly, and, like any internet platform or software, there is no guarantee of data security. Proprietary or confidential information about an employer’s practices, employees or other aspects of a business, even if not retained by the platform, might end up in the wrong hands in the event of a data breach or other cybersecurity event.
Similarly, because ChatGPT and other tools “learn” from each conversation or bit of information loaded to their platform, employees using AI to complete tasks in the workplace may be inadvertently accessing another person’s or business’s trademark, copyright or other intellectual property, creating legal risk for employers.
Employer Considerations
Ultimately, the successful use of AI in employee hiring, recruitment, retention and termination requires employers to consistently evaluate and monitor software platforms and vendors to avoid actual or perceived bias that can more easily translate into material for workplace discrimination or other claims. Employers might consider the following steps to safeguard themselves and their places of work:
▪ Review and update employee handbooks. Most employers have an employee handbook or other document that sets forth the company’s workplace policies and expectations for employee conduct. Employers should review their employee handbooks to ensure the provisions governing the use of computers or other software, the confidentiality of company information, and the return of property upon leaving the company address information shared with generative AI platforms like ChatGPT.
▪ Review the terms of use. As noted above, generative AI tools do not guarantee data privacy and security. Employers and employees alike should avoid sharing any confidential or sensitive information with an AI tool. Employers should also review the terms of use for AI tools to ensure there are clear terms governing the protection of data shared with the platform.
▪ Be mindful of bias. To the extent an employer is using AI tools to streamline the employee hiring, evaluation and separation process, employers should regularly review results for evidence of bias. If bias is uncovered, employers should take immediate steps to address it. Employers should also reiterate their equal employment opportunity and/or diversity, equity and inclusion policies to ensure employees using AI tools in hiring have them top of mind.
Author Bio
Shaniqua Singleton is a Partner at Nelson Mullins Riley & Scarborough LLP. She represents clients on a broad range of employment, commercial, and business litigation matters. Shaniqua is based in Atlanta.