
Living in an Extreme Meritocracy Is Exhausting

A society that glorifies metrics leaves little room for human imperfections.

A century ago, a man named Frederick Winslow Taylor changed the way workers work. In his book The Principles of Scientific Management, Taylor made the case that companies needed to be pragmatic and methodical in their efforts to boost productivity. By observing employees’ performance and whittling down the time and effort involved in doing each task, he argued, management could ensure that their workers shoveled ore, inspected bicycle bearings, and did other sorts of “crude and elementary” work as efficiently as possible. Soldiering—a common term of the day for the manual laborer’s loafing—would no longer be possible under the rigors of the new system, Taylor wrote.

The principles of data-driven planning first laid out by Taylor—whom the management guru Peter Drucker once called the “Isaac Newton … of the science of work”—have transformed the modern workplace, as managers have followed his approach of assessing and adopting new processes that squeeze greater amounts of productive labor from their employees. And as the metrics have become more precise in their detail, their focus has shifted beyond the tasks themselves and onto the workers doing those tasks, evaluating a broad range of their qualities (including their personality traits) and tying corporate carrots and sticks—hires, promotions, terminations—to those ratings.

But beyond calculating how quickly, skillfully, and creatively workers can do their jobs, the management approach known as Taylorism can be—and has been—“applied with equal force to all social activities,” as Taylor himself predicted. Today, Taylorism is nearly total. Increasingly sophisticated data-gathering technologies measure performance across very different domains, from how students score on high-stakes tests at school (or for that matter, how they behave in class), to what consumers purchase and for how much, to how dangerous a risk—or tempting a target—a prospective borrower is, based on whatever demographic and behavioral data the credit industry can hoover up.

In her illuminating new book, Weapons of Math Destruction, the data scientist Cathy O’Neil describes how companies, schools, and governments evaluate consumers, workers, and students based on ever more abundant data about their lives. She makes a convincing case that this reliance on algorithms has gone too far: Algorithms often fail to capture unquantifiable concepts such as workers’ motivation and care, and discriminate against the poor and others who can’t so easily game the metrics.

Basing decisions on impartial algorithms rather than subjective human appraisals would appear to prevent the incursion of favoritism, nepotism, and other biases. But as O’Neil thoughtfully observes, statistical models that measure performance have biases that arise from those of their creators. As a result, algorithms are often unfair and sometimes harmful. “Models are opinions embedded in mathematics,” she writes.

One example O’Neil raises is the “value-added” model of teacher evaluations, used controversially in New York City schools and elsewhere, which decides whether teachers get to keep their jobs based in large part on what she calls a straightforward but poor and easily fudged proxy for their overall ability: the test scores of their students. According to data from New York City, she notes, the performance ratings of the same teachers teaching the same subjects often fluctuate wildly from year to year, suggesting that “the evaluation data is practically random.” Meanwhile, harder-to-quantify qualities such as how well teachers engage their students or manage their classrooms go largely ignored. As a result, these kinds of algorithms are “overly simple, sacrificing accuracy and insight for efficiency,” O’Neil writes.
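To see why a noisy proxy can behave this way, consider a minimal simulation. The numbers here are invented for illustration, not drawn from New York City’s data or from the actual value-added formula: if classroom-level noise swamps a teacher’s stable underlying skill, her score in one year will barely predict her score in the next.

```python
# Toy illustration only: simulated teachers, not real value-added data.
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 1_000

true_skill = rng.normal(0.0, 1.0, n_teachers)  # stable underlying quality
noise_y1 = rng.normal(0.0, 3.0, n_teachers)    # year-1 classroom noise (assumed large)
noise_y2 = rng.normal(0.0, 3.0, n_teachers)    # year-2 classroom noise

score_y1 = true_skill + noise_y1               # hypothetical "value-added" scores
score_y2 = true_skill + noise_y2

r = np.corrcoef(score_y1, score_y2)[0, 1]
print(f"year-over-year correlation: {r:.2f}")  # roughly 0.1, close to random
```

Under these assumed proportions of skill and noise, a teacher’s rating in one year explains almost none of her rating in the next, which is one way an evaluation can come to look “practically random.”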

How might these flaws be addressed? O’Neil’s argument boils down to a belief that some of these algorithms aren’t good or equitable enough—yet. Ultimately, she writes, they need to be refined with sounder statistical methods or tweaked to ensure fairness and protect people from disadvantaged backgrounds.

But as serious as their shortcomings are, the widespread use of decision-making algorithms points to an even bigger problem: Even if models could be perfected, what does it mean to live in a culture that defers to data, that sorts and judges with unrelenting, unforgiving precision? This mentality stems from Americans’ abiding faith in meritocracy: the belief that the most talented and hardest-working should—and will—rise to the top. But such a mindset comes with a number of tradeoffs, some of which undermine American culture’s most cherished egalitarian ideals.

First, it is important to recognize that the word “meritocracy,” coined by the British sociologist Michael Young in his 1958 book The Rise of the Meritocracy, originally described not some idealized state of perfect fairness, but a cruel dystopia. The idea was that a society evaluated perfectly and continuously by talent and effort would see democracy and equality unravel, and a new aristocracy emerge, as the talented hoarded resources and the untalented came to see themselves as solely to blame for their low status. Eventually, the masses would cede their political power and rights to the talented tenth—a new boss just as unforgiving as the old one, Young suggested.

The new technology of meritocracy goes hand in hand with the escalating standards for what merit is. To hold down a decent job in today’s economy, it is no longer enough to work hard. Workers need brains, creativity, and initiative. They need salesmanship and the ability to self-promote, and, of course, a college degree. And they need to prove themselves on an ever-expanding list of employer-administered metrics.

A mindset of exhaustive quantified evaluation naturally appeals to elite tech companies like Amazon, where work life—judging from a New York Times exposé last year of the online retailer’s corporate culture—can be nasty, brutish, and often short. In the article, former Amazon workers described how supervisors would put them through relentless performance reviews across a wide range of measures. The critiques were so sweeping that they included the secret assessments of coworkers—who apparently weren’t above backstabbing with a confidential tip—and so savage that employees regularly cried at their desks. One human-resources executive summed up the office culture, approvingly, as “purposeful Darwinism.”

While Amazon’s fixation on metrics and merit, as described by the Times, was especially intense, it speaks to a variety of trends that have ramped up the competition in workplaces everywhere. At more traditional corporations like Accenture, Deloitte, and G.E., managers may have grown skeptical of the hidebound annual performance review, but many have simply replaced it with a more or less perpetual performance review—at G.E., reviews are conducted through a smartphone app. In the service sector, too, workers have found their daily lives meticulously regulated and continually assessed, from just-in-time scheduling at Starbucks to instant customer-service rankings at Walmart. In fact, Silicon Valley aside, alienation by algorithm may be a greater problem for this latter, less-educated group of workers: “The privileged,” O’Neil notes, “are processed more by people, the masses by machines.”

In the job search, the sorting is particularly relentless. Employers want a seamless work history and snazzy résumé, of course. But they want a spotless personal record, too—not just a clear background check, but also an online identity free of indiscretions. This is not just the case for tech workers in Seattle. Those further down the economic ladder are also learning that a good work ethic is not enough. They can’t just clock in at the factory anymore and expect a decent paycheck. Now they have to put on a smile and pray customers will give them five stars in the inevitable follow-up online or phone survey.

Some metrics are less official. One of the former autoworkers I interviewed for my book, for instance, found himself scrambling to find hours as a cashier at a department store. His manager told him that if he got more customers to sign up for the store credit card, he’d get more shifts. “It’s kinda like an unwritten rule,” he told me after getting fewer shifts than normal. “And that’s why maybe I’m on a ‘vacation’ as far as all this coming week.”

Some might point out that these sorts of individual incentives, however harsh they can be at times, are a good thing: People should strive to be the very best they can be, and efficiency should be a priority. Hiring the best workers is good for a company’s profits, and good for its consumers as well, who benefit from improved customer service and higher-quality, lower-cost things to buy. Customers want their Amazon packages on time—if it takes someone sobbing at his desk in Seattle to ensure that, so be it.

But Americans also believe—or at least they like to teach their children—that life is not merely a competition. From the days of the Puritans, they have found ways to temper their zeal for meritocracy, self-reliance, and success with values of equality, civic-mindedness, and grace, a surprising harmony of principles that the country’s earliest observers lauded as distinctly American. When society fetishizes measurement and idolizes individual merit at the expense of other things, however, it reinforces a go-it-alone mentality that is ultimately harmful to those egalitarian ideals.

Indeed, the desire for an efficiency achieved through a never-ending gauntlet of appraisals is unhealthy. It exhausts workers with the need to perform well at all times. It pushes them into a constant competition with each other, vying for the highest rankings that, by definition, only a few can get. It convinces people—workers, managers, students—that individual metrics are what really matter, and that any failure to dole out pay raises, grades, and other rewards based on them is unfair. And it leads the better-off to judge those below them, homing in on all the evidence that tells them how much more they deserve than others do. In this way, “objective” models provide socially acceptable excuses to blame certain people—most often, the poor and people of color—for a past that, once digitally noted, is never really forgotten or forgiven.

This judgmental attitude can turn inwards as well. The expansion of metrics means an ever-growing body of evidence convincing many workers that they are inferior in some fundamental way. For the unemployed workers I spoke with in my sociological research, failing to meet the growing array of criteria to get a good job often meant, in their minds, failing at life. There was shame and, sometimes, self-blame. Replaying past decisions in their minds—about schooling, finances, careers—plunged them into depression and even thoughts of suicide. They were “losers,” as one worker put it, in a society that values winning at all costs.

This is a much different environment for workers than the one that existed after World War II, when unions held sway, the government was willing to intervene more boldly in the markets, and income inequality was much less stark. In my research I have spoken with dozens of unemployed American and Canadian autoworkers, and, as Taylorized and regimented as work on the assembly line has long been, the union jobs they used to hold were in other ways very different from those of Uber drivers, TaskRabbit runners, and the legions of other workers in the growing gig economy, a lonely crowd of contractors monitored by apps and other modern-day versions of Taylor’s time-and-motion studies.

Unions, with their seniority-based hierarchies and ironclad employment protections, counteract systems of merit that are meant to hire, fire, and promote based on individual skill alone. Today, a little over a tenth of American workers are union members, but in the 1950s, that share was a third. Unions ensured that a large swath of the country’s workforce back then received decent incomes, generous benefits, and job security—boosting wages even in workplaces that were not unionized, as employers competed for workers or sought to preempt organizing drives. In turn, high pay for low-skilled work—the sorts of wages that Henry Ford and a number of other corporate leaders of his day were willing (albeit under pressure from organized labor) to pay—helped build a vibrant and broad-based middle class. Even workers without a bachelor’s degree—which today still describes two-thirds of the population over age 25—could secure this prosperity.

It’s not just that unions helped raise incomes for ordinary workers. It’s also that the various ways they fettered the market’s invisible hand meant that the cost of losing the meritocratic race was not so devastating. The union practices that many people find noxious nowadays—how they protect workers who don’t “deserve” better pay, for instance—also help ensure that economic inequality does not run rampant, and that those who falter along the way, as so many do, don’t fall into poverty.

As I’ve written about before, the loss of the good jobs that once supported families and entire communities has had terrible consequences for America’s working class. But more widely, this loss also speaks to the dying out of an alternative vision of egalitarian progress—the idea that Americans could band together to win a better life for everyone, not just the highest-skilled workers. Today, however, the traditional labor movement has been all but crushed, and is no longer around to provide a counterweight to corporate power. Perhaps not by coincidence, the middle class is now dwindling in size.

Critics argue that there is no going back to the broadly shared prosperity that existed before technological change and open markets ripped apart the old social contract that offered job security in exchange for hard work. To some extent that may be true, but anti-union and anti-government politics have also been part of this story of decline, and even today there remain tangible policies that can alleviate some of the economic pain. To his credit, President Obama has recently spoken out about the need to pursue such policies and otherwise prepare America’s workforce for the massive losses of good jobs—even those held by the well-educated middle class—that technologies like artificial intelligence may usher in. Already, some professional workers have found aspects of their highly paid positions outsourced, automated, and contracted away.

The rationale for these corporate decisions, as always, is the desire for higher productivity and profits. But even if the greater efficiencies that result mean society as a whole is better off, at least in the long term, that is little consolation to those who lose their livelihoods and whose families must cope—often in this country on their own—with the painful, all-too-personal consequences of economic dislocation. To a troubling but oft-ignored extent, one person’s inefficiency is another person’s good job.

As much as America’s workers need help, however, policies will only change if culture does too. A norm of constant performance reviews focuses people’s energies not on structural reforms but on the ups and downs of their individual ratings, scores, and tallies. It turns some into modern-day Pharisees—expecting perfection, despising failure, excusing nothing—and deepens the despair of those they scorn. “You can’t make no mistakes,” one of the unemployed workers I interviewed told me. “You got to do everything perfect. You can’t get into trouble. You can’t do nothing. You got nobody to run to.”

In the final accounting, this unbalanced culture serves no one: not the ambitious corporate employee stifling his empathy in order to clamber over coworkers, and certainly not the unemployed worker ostracized by a society that judges her to be a failure. But doing something about this will demand more than a technical solution. It will require challenging deep-rooted notions of what success is and more leverage on the side of workers—as well as, perhaps, a measure of grace.