
Unemployment is low. Turnover costs are high. Application and selection processes are increasingly automated. HR departments are stretched thin. These are among the many reasons that employment tests, including skills assessments, cognitive ability tests, and personality profiles, are popular among employers today.
Tests offer an attractive solution to many employers’ talent acquisition problems. Commercially available tests and test developers offer evidence that pre-employment testing streamlines the selection process, helping an employer identify the best qualified candidates quickly. They tout their success in decreasing turnover and saving employers the precious dollars associated with hiring and training new employees. Ease of use has never been greater than it is today: applicants can take tests online, employers receive results rapidly and electronically, and results integrate seamlessly with applicant tracking systems. These benefits, however, come with a great deal of risk if careful thought isn’t put into the selection and use of a test.
When considering a test, its validity should be the highest-priority question: Does this test measure what it is supposed to measure? There are three types of validity commonly associated with employment testing:
- Content or construct validity looks at whether the test relates to the knowledge, skills, and/or abilities that are needed for the job. It often stems from a job analysis and a logical process wherein subject matter experts identify appropriate test items.
- Concurrent validity correlates the responses of test takers with the responses of known experts or high performers in the job.
- Predictive validity compares initial test results to later job performance to determine if the test properly predicted the test taker’s future success in the job.
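To make the predictive (and concurrent) validity idea concrete, the sketch below shows, in Python with entirely hypothetical scores and ratings, how a validity coefficient is commonly estimated: as the correlation between applicants’ test scores and a later measure of their job performance. The data and variable names are illustrative assumptions, not figures from any real validation study.

```python
from math import sqrt

# Hypothetical data (illustrative only): pre-hire test scores and the same
# individuals' job performance ratings collected several months after hire.
test_scores = [72, 85, 90, 60, 78, 95, 55, 82, 70, 88]
performance = [3.1, 4.0, 4.3, 2.6, 3.5, 4.6, 2.4, 3.8, 3.0, 4.2]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

validity = pearson_r(test_scores, performance)
print(f"Estimated validity coefficient: {validity:.2f}")
```

Mechanically, a concurrent study looks the same; the difference is that the performance measure comes from current employees taking the test rather than from applicants who are hired and then followed over time.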
Intertwined with validity is the concept of adverse impact or disparate impact. Adverse impact occurs when a test disproportionately screens out a particular group of people, based on a statistical comparison of the demographics of applicants who take the test and applicants who pass the test. It is particularly concerning when the group being screened out shares a protected characteristic such as race, gender, or age. Adverse impact can be acceptable, though, if the test is valid – measuring what it’s supposed to measure – and no similarly valid test exists that does not have adverse impact.
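The statistical comparison behind adverse impact is usually framed in terms of pass (selection) rates by group. A widely used rule of thumb from the federal Uniform Guidelines on Employee Selection Procedures is the “four-fifths rule”: if one group’s pass rate is less than 80 percent of the highest group’s pass rate, adverse impact is generally inferred. The sketch below, in Python with made-up applicant counts, shows the arithmetic; it is an illustration of the rule of thumb, not a substitute for a full statistical or legal analysis.

```python
# Hypothetical applicant and passer counts by group (illustrative only).
applicants = {"Group A": 200, "Group B": 120}
passed     = {"Group A": 120, "Group B": 48}

# Pass (selection) rate for each group.
rates = {group: passed[group] / applicants[group] for group in applicants}
highest_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest_rate
    flag = "potential adverse impact" if ratio < 0.80 else "within the four-fifths rule"
    print(f"{group}: pass rate {rate:.0%}, {ratio:.0%} of highest rate -> {flag}")
```

Here Group B passes at 40 percent versus 60 percent for Group A, or about 67 percent of the highest rate, which falls below the four-fifths threshold.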
Legal Risk for Employers
Validity and adverse impact together represent an employer’s greatest legal risk when using a test. There are two schools of thought on test validation and adverse impact. The first, more conservative, approach is to validate a test before using it. Validation can be expensive and time-consuming, especially if you’re developing a new test. In this scenario, adverse impact analyses must still be conducted, but a statistically significant difference in test results among test takers is less concerning because you have already determined that your test is valid. The employer’s obligation, then, is only to determine whether another equally valid test exists without adverse impact.
The second approach is to use a test without confirming its validity and to conduct regular adverse impact analyses to determine whether any protected classes are being disproportionately rejected for employment because of the test. If there is no adverse impact, the validity of the test is less concerning from a legal standpoint, saving an employer both time and money. The risk is higher, though, if adverse impact does exist. At that point, selection decisions will already have been made, and rejected candidates may succeed if they file a discrimination claim based on the test. If adverse impact exists, the employer must either validate the test or stop using it.
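Regulators and courts often look beyond the four-fifths ratio to whether the difference in pass rates is statistically significant, commonly treating a gap of roughly two or more standard deviations as significant. A minimal sketch of that check, a two-proportion z-test on the same made-up counts used above, appears below; it is illustrative only.

```python
from math import sqrt

# Hypothetical counts (illustrative only).
n_a, pass_a = 200, 120   # Group A: applicants tested, applicants who passed
n_b, pass_b = 120, 48    # Group B: applicants tested, applicants who passed

p_a, p_b = pass_a / n_a, pass_b / n_b
p_pool = (pass_a + pass_b) / (n_a + n_b)                  # pooled pass rate
std_err = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / std_err

print(f"Difference in pass rates: {p_a - p_b:.1%} (z = {z:.2f})")
print("Roughly two or more standard deviations apart" if abs(z) >= 2
      else "Less than two standard deviations apart")
```

A difference this large (about 3.5 standard deviations in this invented example) would ordinarily prompt either a validation effort or a change in how the test is used.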
Handling a test that has adverse impact can be tricky. An illustrative case is Ricci v. DeStefano, decided by the Supreme Court in 2009, in which the City of New Haven used tests to select firefighters for promotion to lieutenant and captain. After administering the tests, the City discarded the results and left the positions vacant because Black candidates scored disproportionately lower than white candidates and the City feared a disparate impact claim. The white firefighters who would have been promoted sued, arguing they had been denied promotion because of their race. The Supreme Court held that the City’s decision to throw out the test results was itself unlawful discrimination on the basis of race.
Considerations When Using Tests
Still, many employers will decide that the benefits of using a pre-employment test outweigh the risks. If faced with the decision to use a test, consider:
- Validity: Will you validate the test specific to your organization or rely on validation studies from the test vendor? Ensure the validation studies are specific to your industry, your geography, and the jobs for which you intend to use the test.
- Accessibility: Determine whether the test is accessible to individuals with a variety of disabilities, including cognitive disabilities, and what reasonable accommodations you might make for an applicant unable to take the test due to a disability. Additionally, is your workforce multi-lingual and will the test be available in multiple languages?
- Process: For which positions do you intend to use the test? Has it been validated for all of them? At what stage of the application process will you present the test? Both your time savings in the selection process and your legal exposure from the volume of rejected applicants could vary significantly depending on whether you offer the test during the initial application process or only to your top few candidates.
- Interpretation: How will you calculate and interpret the results? If scoring will be handled by in-house staff, rather than being automated or handled by a vendor, what training will your administrators receive? Will you take steps to establish interrater reliability if multiple people will be interpreting the results? (One common way to check agreement between raters is sketched after this list.)
- Consistency: What procedures will you put in place to ensure the test is administered and decisions are made consistently across applicants? A valid test can be a powerful tool in proving the non-discriminatory nature of your selection decisions, but applying it differently across applicants, or not at all, undermines its utility. A cautionary tale: An employer used a test of manual dexterity to select applicants for small-assembly jobs. Over the years, recruiters noticed that men were less likely than women to pass the test, presumably because of the size of their fingers. Recruiters began screening out men early in the selection process, without testing them, on the assumption that they would fail. Adverse impact emerged, a government agency got involved, and it became clear that the employer was unlawfully discriminating on the basis of gender. Had the employer continued using the test, the disparate selection rates between men and women might have been justified.
- Monitoring: If any rejections are automated based on test results, how will you involve humans to periodically verify that the system-driven logic is functioning properly? Is the test continuing to add value (e.g., streamlining the selection process or reducing turnover) as the number of selections you make using the test result increases? How often will you conduct adverse impact analyses? How will you monitor whether talent acquisition staff are administering the test and using the results correctly?
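On the interrater reliability question raised under Interpretation above: one common way to quantify agreement between two people scoring the same tests is Cohen’s kappa, which corrects raw agreement for the agreement you would expect by chance. The sketch below uses invented pass/fail judgments purely for illustration.

```python
from collections import Counter

# Hypothetical pass/fail judgments by two raters on the same ten tests.
rater_1 = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "fail"]
rater_2 = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "fail", "fail", "fail"]

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    labels = set(a) | set(b)
    expected = sum((counts_a[label] / n) * (counts_b[label] / n) for label in labels)
    return (observed - expected) / (1 - expected)

print(f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")
```

A kappa near 1 indicates strong agreement; much lower values suggest the raters need clearer scoring guidance or additional training before their interpretations drive selection decisions.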
With careful consideration, a test can add value to the selection process. Without careful consideration, a test can create legal liability, administrative burden, and bad publicity for an employer. Don’t underestimate the importance of careful test selection, user training, and monitoring of outcomes. Even in our highly automated society, human research and oversight can make all the difference.
Copyright © 2018 HR Works, Inc.