
A Federal Data Failure Is Making It Hard to Talk About COVID

Without a standard, trusted language of COVID data collection, it’s been hard to measure the disease, track its trend, and build effective policy.

“Can we talk?” comedian Joan Rivers always asked. When it comes to COVID-19, the answer seems to be no.

To talk about COVID, let alone act effectively to stop it, we need a common language that we all understand and on which scientists can base their work. But from the very beginning, the issue—and the language we’ve tried to use to talk about it—has been hyper-politicized, and that’s been a major impediment in the nation’s efforts to attack the virus.

When it comes to the language of COVID, the United States stands in sharp contrast with the rest of the world. The Germans have their Robert Koch Institute—the country’s version of the Centers for Disease Control and Prevention—and its reports are a model of clarity, precision, and political neutrality in nailing down the problem.

In the United Kingdom, there’s an up-to-the-minute dashboard of cases, hospitalizations, and the death rate, with the data broken down by region. Australia, likewise, has an easy-to-read “BeCovidSafe” dashboard that tracks the virus. In Canada, there’s a handy outbreak update. Japan has its COVID tracker powered by data from the prefectural governments, and Korea’s website builds on data from the country’s Central Disease Control Headquarters. In all these cases, the building blocks of data come from the government, and they drive the public debate. 

In the United States, by contrast, the COVID language problem has been muddled from the beginning. The New York Times is reporting daily trends and hot spots based on data from county governments. For the Washington Post, data comes from the paper’s reporters and from the notable Johns Hopkins University COVID-19 dashboard, whose numbers in turn are compiled from a vast array of local and state public health departments. Then, of course, there’s the University of Washington COVID model, which builds on the Johns Hopkins GitHub repository, and the University of Texas COVID-19 Modeling Consortium, which has its own methodology.

This is one of the most important ways the United States’ handling of COVID-19 differs: the analyses driving the American debate are being put together by non-governmental sources.

The CDC has its own dashboard, but it has been used mostly by public health professionals and has drawn far less attention than the data on which the media have relied. And its data have been the subject of rising political controversy as the Trump administration tried to bypass the agency’s data collection role and, at least for a while, took down the CDC’s public information on the virus. That further muddled the data challenge in tracking COVID-19 and increased the politicization of the data process.

Moreover, there have been surprising inconsistencies in the data strategy along the way. Experts compiling the media analyses are keeping track of what they call numerous “reporting anomalies” that make it even harder to speak in a common language about COVID-19. In late June, New Jersey shifted from reporting only confirmed deaths to including probable deaths as well. On June 30, New York City released a large batch of death reports from earlier periods without identifying when the deaths had occurred, producing a spike that made trends hard to measure. Kansas started reporting its count only on Mondays, Wednesdays, and Fridays, while in Florida a major public squabble over the state’s dashboard led its designer to claim she was fired.

The government’s central role in collecting the data and building the data superstructure in other countries certainly hasn’t boiled away the politics. The question of wearing masks has roiled Britain, and the Germans are debating how and whether to reopen schools. But, for the most part, in most other countries, government professionals have been responsible for collecting the basic data on which the political decisions have been made. Neutral competence has been the foundation of the COVID debate almost everywhere—except in the United States. And without a standard, trusted language of COVID data collection, it’s been hard to measure the disease, track its trend, and build effective policy. 

There’s surely great value in creating different models to assess the virus and predict its future. In a disease with so much uncertainty, the more perspectives we can get on it, the better. But, at the core, the United States has been profoundly hampered by its struggle to put together the most basic information about what’s happening in a way that ensures everyone is speaking the same language.

The United States built its public service—and the world’s premier public health system—on the principle of neutral competence. Since the passage of the Pendleton Act in 1883, the core principle of American bureaucracy has been to provide politically unbiased information to policymakers and to deliver politically unbiased services to citizens. The fierce political battles that have raged since the beginning of the COVID outbreak have undermined this principle, crippled our response, and put our people at greater risk.

We’ve had lots of debates about how the American response compares with that of other nations. But nothing is more fundamental than the way that, from the beginning, efforts to develop a common language with which to assess and guide policy have been pushed aside. Now, as we debate the best steps to tackle the flareup of the virus around the country, we find our efforts undermined.

So “can we talk?” The answer, sadly, is no, because we don’t have a common language for the conversation. 

Donald F. Kettl is the Sid Richardson Professor at the LBJ School of Public Affairs at the University of Texas at Austin, and he is the author of The Divided States of America (Princeton University Press, 2020).