
The Evidence-Based Policy Revolution Waiting to Happen

Government decision makers need to know more than whether a program works. They also need to know whether it's cost-effective.

Is the evidence-based policymaking movement in the federal government on the cusp of an important change? It’s too soon to say for sure, but hopefully so.

The change would expand the questions that currently drive the evidence movement, such as “Does this program work?” and “Which version of this program works best?”, to include questions like “What does this program cost?”, “Is this program cost-effective?” and “Which version of this program is most cost-effective?”

That may seem like a subtle change, but providing decision makers with information on costs and cost-effectiveness, not just on how well programs produce desired outcomes, would give them valuable new information for identifying the programs and policies with the highest return on investment. In practical terms, that would allow them to spend scarce government resources more wisely and better achieve the goals of public programs.

Today, cost information is sometimes woven into program evaluations, particularly large and well-funded ones, but it’s not the standard. Its absence leaves big gaps in our knowledge. Consider, for example, a literacy program that is found, through rigorous evaluation, to be more effective than the status quo at helping low-income students read better. Should federal policymakers encourage scaling the program up? Or should state and local policymakers adjust their budgets to implement it?

Without cost data on the program, as well as on alternative ways to improve educational outcomes, it’s tough to know. Maybe scaling the program up would be a wise use of government funds. Or maybe the program is a bit more effective than the status quo but costs twice as much, using up resources that would be better spent on other education needs. Or maybe there are alternative, more cost-effective ways to improve reading outcomes for low-income students.
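To make that trade-off concrete, here is a minimal sketch of the arithmetic behind a cost-effectiveness comparison, using entirely hypothetical program names, costs and effect sizes (none drawn from any real evaluation): divide each program’s cost per student by the outcome gain it produces, then compare the resulting ratios.

```python
def cost_effectiveness(cost_per_student: float, effect_size: float) -> float:
    """Dollars spent per unit of outcome gain (here, per standard deviation of reading improvement)."""
    return cost_per_student / effect_size

# Entirely hypothetical figures -- not drawn from any real evaluation.
programs = {
    "New literacy program": {"cost_per_student": 1200.0, "effect_size": 0.15},
    "Status quo":           {"cost_per_student": 600.0,  "effect_size": 0.12},
}

for name, p in programs.items():
    ratio = cost_effectiveness(p["cost_per_student"], p["effect_size"])
    print(f"{name}: ${ratio:,.0f} per standard deviation of reading gain")

# With these made-up numbers, the new program is more effective (0.15 vs. 0.12 SD)
# but less cost-effective ($8,000 vs. $5,000 per SD) -- the trade-off an
# effectiveness-only evaluation cannot reveal.
```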

Today, decision makers might have to guess about the answers to those questions. That’s why there’s no substitute for robust cost analyses integrated into program evaluations.

What will it take to bring about that evidence revolution? An important place to start is with federal agencies. By requiring the program evaluations they fund to assess costs, agencies can greatly enhance those evaluations’ usefulness. Over time, the results will help shine a spotlight not just on what works, but also on how much effective programs cost.

That is the step the Education Department’s research arm, the Institute of Education Sciences, has taken over the past two years: it now requires that a cost component be part of an increasing share of the research and evaluation studies it funds.

Asking more of program evaluators comes with new obligations for agencies. Those include ensuring not only that cost evaluations are adequately funded but also that tools are available to help researchers do that type of analysis efficiently. That is why the Institute of Education Sciences has been funding several resources to make cost analysis easier for education researchers, including a free online database of unit costs for common inputs needed to implement educational programs and policies, as well as a cost-analysis toolkit now in development.

Congress also has a role to play. It can send a signal to agencies that they need to incorporate questions about cost-effectiveness into their learning agendas, which are now required of large departments by the Foundations for Evidence-Based Policymaking Act.

Moreover, the federal government can look to states and localities for inspiration. Since 2010, 27 states and 10 counties have joined the Pew-MacArthur Results First Initiative, which helps jurisdictions use evidence-based approaches—including cost-benefit analysis—to inform their policy and budget processes. New Mexico’s Legislative Finance Committee, for example, analyzed a range of policy areas to inform state leaders about which programs had the highest return on investment. The results led to investments of more than $400 million in cost-effective programs, including increased spending on pre-Kindergarten education.

Setting a new expectation that program evaluations include cost analyses would be a big step forward in helping federal, state and local leaders make better-informed decisions. That includes decisions about whether to adopt new programs or policies, which program strategies to choose, and whether to continue current investments.

It would also help promote accountability by providing policymakers, researchers and the public with better understanding of how well, and how cost-effectively, programs are being implemented. With benefits that would accrue to both taxpayers and those served by government, this could be an evidence revolution well worth the cost.

Andrew Feldman is a director at Grant Thornton and served as a special adviser on the evidence team at the White House Office of Management and Budget in the Obama administration.

Rebecca Maynard is professor emeritus of education and social policy at the University of Pennsylvania’s Graduate School of Education. She previously served as a senior official at the U.S. Department of Education’s Institute of Education Sciences.