People Analytics and AI: 5 Steps to Minimize Risk and Maximize Reward in Employment Decisions
The world of artificial intelligence (“AI”), big data, and predictive analytics presents great opportunities for employers to improve the quality and efficiency of their operations, including their employee selection procedures. Just as AI has revolutionized corporate marketing and advertising techniques, it promises to significantly change the future of employment decisions. But a failure to exercise proper precautions presents serious risks with severe legal consequences. Employers looking to leverage AI selection tools in their employment decisions should be mindful of the risks associated with hidden bias and commit to diligently monitoring their technology systems.
If you’re thinking about implementing an AI selection tool or have already begun the journey, here are some initial steps you should take:
- Scrutinize the data to avoid and address hidden biases.
Employers often adopt AI selection tools with the hope and expectation of reducing bias in employment decision-making. Indeed, many vendors promise to “scrub” or “remove” bias from decisions by replacing subjective judgments with objective metrics. But AI tools may contain hidden biases due to limitations in the data, the construction of the algorithm, or both. Data sets may be flawed as a result of limited sample sizes or the disproportionate representation of a single group, and an algorithm’s overreliance on prohibited or inherently discriminatory distinctions may bias the outputs.
In the initial stages of adopting an AI tool, employers should take proactive steps to ensure the quality of the data and mitigate future claims that biased data were used. This process begins with identifying appropriate sources of data. If an algorithm relies on historical data that have produced dubious results in the past, the selection tool will only perpetuate those biases when used as the basis for future decision-making.
The next step is to think critically about the team that will be constructing and regulating the AI tool. When the team tasked with overseeing the AI tool lacks diversity – whether demographically or in life experience or background – the risk of bias in the output increases. To mitigate these concerns, employers should endeavor to assemble a diverse team to scrutinize data sets and assumptions and monitor the outputs for bias.
Finally, employers should be careful about training an AI tool on a homogeneous data set. When a tool is fed data drawn entirely from white male incumbents, employers should not be surprised when it prefers white male candidates.
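To make this concrete, below is a minimal sketch of a pre-training representation check. The “gender” field and the 10% floor are illustrative assumptions, not legal standards; the appropriate fields and thresholds will depend on the tool and the applicable law.

```python
# A minimal sketch of a pre-training representation check.
# The "gender" field and the 10% floor are illustrative assumptions only.
from collections import Counter

def representation_report(records, field, floor=0.10):
    """Return groups whose share of the data falls below `floor`."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items() if n / total < floor}

# Hypothetical training set dominated by one group: the check surfaces
# the imbalance before the model learns to prefer that group.
training_data = [{"gender": "male"}] * 10 + [{"gender": "female"}]
print(representation_report(training_data, "gender"))
# {'female': 0.0909...} -- underrepresented; rebalance or augment the data
```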
- Conduct a proper validation study.
While a selection tool may utilize neutral criteria for evaluating candidates, the results may have a disproportionate impact on protected groups. Federal and state laws prohibit employers from using a selection tool that has a disparate impact unless the tool is “job related and consistent with business necessity.” The federal agencies tasked with policing employment discrimination – the Equal Employment Opportunity Commission (“EEOC”) and the Office of Federal Contract Compliance Programs (“OFCCP”) – follow the Uniform Guidelines on Employee Selection Procedures to determine whether the selection tool is job related and consistent with business necessity.
The Uniform Guidelines proffer three methods for demonstrating job relatedness: criterion-related, content, and construct validity. These “validation” studies involve analyzing how the tool operates and what it actually measures. Conducting a validation study not only helps insulate the tool from legal challenge, but also assists in determining whether the tool is accurate and effective and performs as expected. Conducting the study early in the process helps an employer tailor the tool to the needs of the company while increasing the legal defensibility of the tool.
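For illustration only, the statistical core of a criterion-related study can be sketched as a correlation between the tool’s scores and a measure of job performance. The sample data and the cutoff below are assumptions made for the example; a defensible validation study requires a proper design, adequate sample sizes, and typically the involvement of an industrial-organizational psychologist.

```python
# A minimal sketch of the statistical core of a criterion-related
# validation study: do the tool's scores track job performance?
# The data and the 0.3 cutoff are illustrative assumptions only.
from statistics import correlation  # Python 3.10+

tool_scores = [62, 71, 55, 88, 90, 67, 74, 81, 59, 93]            # tool output
performance = [3.1, 3.4, 2.8, 4.2, 4.5, 3.0, 3.6, 4.0, 2.9, 4.4]  # supervisor ratings

r = correlation(tool_scores, performance)
print(f"validity coefficient: r = {r:.2f}")
if r < 0.3:  # illustrative cutoff; a real study also tests statistical significance
    print("weak relationship -- the tool may not be measuring job performance")
```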
- Continuously monitor the tool and conduct follow-up adverse impact analyses.
Employers adopting AI selection tools often are tempted to “set it and forget it.” But by their very nature, AI tools are constantly evolving. Employers must therefore establish processes for ongoing monitoring of the tool and its outputs. Diligent employers will regularly conduct adverse impact analyses to minimize the risk that the tool has a disparate impact on protected groups (and to address any instances of disparate impact that do arise).
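One widely used screen, drawn from the Uniform Guidelines, is the “four-fifths rule”: a group’s selection rate below 80% of the highest group’s rate is generally regarded as evidence of adverse impact. A minimal sketch follows; the applicant and selection counts are hypothetical, and a real analysis should also account for sample size and statistical significance.

```python
# A minimal sketch of a four-fifths rule screen per the Uniform Guidelines.
# The applicant and selection counts below are hypothetical.
def four_fifths_check(applicants, selected):
    """applicants, selected: dicts mapping group -> counts."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

applicants = {"group_a": 200, "group_b": 150}
selected = {"group_a": 60, "group_b": 27}  # selection rates: 30% and 18%

for group, ratio in four_fifths_check(applicants, selected).items():
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```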
- Ensure accessibility of the AI tool.
Because of the automated nature of AI selection tools, they typically reside online. Employers should craft a plan for making their tools accessible to applicants with disabilities. Website accessibility claims are on the rise, and such claims may spread to online or computer-based selection tools if employers do not take the necessary steps to accommodate applicants with disabilities.
- Consider data security and applicable data privacy laws.
The use of big data in employee selection procedures necessarily requires the compilation of data relating to employees or applicants that are often personal, sensitive, and private. Employers utilizing AI tools should evaluate the security of their own data, as well as the security of data collected and compiled by any third-party vendor. In addition to guarding against outside threats, employers should minimize the risk that a current employee will misuse the data by limiting access to it, as in the sketch below.
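As one illustration of access limiting, this sketch filters applicant records by role before display. The roles and fields are hypothetical; in practice, this logic typically lives in an HRIS or identity-management system rather than in application code.

```python
# A minimal sketch of least-privilege access to applicant data.
# The roles and fields are hypothetical examples.
ROLE_PERMISSIONS = {
    "recruiter": {"name", "resume", "tool_score"},
    "hr_analyst": {"name", "resume", "tool_score", "demographics"},
}

def visible_record(record, role):
    """Return only the fields the given role is permitted to see."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

applicant = {
    "name": "A. Candidate",
    "resume": "...",
    "tool_score": 87,
    "demographics": "example-only",
    "ssn": "xxx-xx-xxxx",  # sensitive; visible to no role defined above
}
print(visible_record(applicant, "recruiter"))  # excludes demographics and ssn
```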
Further, employers operating in the European Union should be aware of the General Data Protection Regulation (“GDPR”), which regulates how personal data may be processed. The GDPR defines personal data expansively and is more protective than U.S. law. It provides individuals the right to object to the automated processing of personal data and requires employers to provide notice to, and obtain consent from, individuals before such processing. Individuals also have the right to access, correct, and delete their personal data. Thus, employers using AI tools in the EU must take additional steps to comply with the GDPR.
No doubt, people analytics will continue to revolutionize human resources and the methods for sourcing and selecting employees. Employers, however, should not blindly adopt AI selection tools without taking the above precautions to minimize legal risk and, indeed, improve the efficiency and efficacy of the tools to meet their individual needs.
Want to learn more on this topic? Nathaniel Glasser is presenting at the 2019 People Analytics & Workforce Planning Conference in Miami, Florida (click the link to attend in person or virtually). Nathaniel is also taking questions from the audience on the legal implications of people analytics and AI tools for employment decisions. If you have a question you’d like to ask him, submit it now to alan.mellish@hci.org.