Should Computers Decide Who Gets Hired?
When it comes to reviewing job applications, humans are relatively bad at selecting the best humans.
Anyone who has ever looked for a job knows that sometimes connections can trump qualifications. That’s why networking—despite its awkwardness—has become such a highly touted skill. Knowing someone who knows someone could mean finding out about a job before it’s publicly posted, or better yet, finding someone who can put in a good word or review an application themselves. Many people hate this, because it is perceived to be unfair. But do these personal and subjective assessments ultimately result in better hiring?
A new paper from the National Bureau of Economic Research says quite the opposite: Relying on a “feel” for a candidate—as opposed to objective qualifications—makes managers’ hiring decisions worse.
The paper’s authors, Mitchell Hoffman of the University of Toronto, Lisa B. Kahn of Yale, and Danielle Li of Harvard, say that, at least in theory, there are two ways that managerial discretion could go. It could be better: These decision-makers are privy to better, more comprehensive information about job candidates than tests and resumes can provide, so they wind up hiring people who stick around longer and perform at a higher level. Or it could be worse, with managers instead injecting their own biases or irrational preferences into the decision-making process, thus making worse decisions for their firms.
In order to figure out which scenario is more likely, the researchers introduced job testing at 15 firms that recruit for low-skilled service jobs, such as call-center and data-entry work. The tests included not only questions about technical and cognitive skills, but also questions to assess personality and overall fit with the firm. The companies hired HR managers and gave them the test results (coded green for the best scores, yellow for moderate, and red for the lowest), but also gave them latitude to factor in other qualities that they thought might make for good hires. Essentially, managers were given the go-ahead to pick candidates with lower scores if they believed those candidates would in fact be better hires. That’s when things went off the rails.
According to the study, when managers used their discretion to override the hiring order implied by the test results (by hiring an applicant with a score in the yellow range, say, when one in the green range was available), the outcomes, in both tenure and productivity, were worse.
So does this research mean that hiring should be left to algorithms?
Implementing job tests resulted in new hires who stuck around 15 percent longer than those who weren’t tested, suggesting that tests help managers find candidates who are a better fit. But hiring at many levels is much more complicated than that, especially when considering the various ways to measure the success or worth of workers and firms.
Humans can certainly harbor biases and illogical preferences, which can color hiring practices when managers use personal discretion to sift through applicants. A preference for, or conversely an aversion to, a particular school, gender, sexual orientation, race, or ethnicity—whether conscious or not—can become a huge problem. And plenty of studies have shown that such biases can affect the way hiring decisions are made.
But on the other hand, reducing people—and the firms they work for—to data points measures a firm’s success only in terms of productivity and tenure, and that might be a shallow way of interpreting what makes a company successful. Firms populated only by high-achieving test takers run the risk of being full of people who are all the same: from similar schools, with the same types of education, similar personality traits, or the same views. That could stall some of the hard work being done to introduce more diversity of every kind into companies all over the country. And that type of diversity, too, has proven to be an increasingly important factor in overall firm performance.
For their part, the researchers recognize that their findings shouldn’t mean all hires are based solely on test performance. Instead, they suggest that giving hiring managers only limited discretion to override test results in favor of other applicant characteristics would let algorithms help curb bias. The real challenge, it seems, will be finding the right balance.