An evaluation policy is necessary to comply with the Evidence Act, but it also provides a critical foundation for making the most of the new law.
Implementation of the federal Foundations for Evidence-Based Policymaking Act is already underway, with large agencies required to submit an update to the Office of Management and Budget this month on the development of learning agendas, annual evaluation plans and capacity assessments. What can agencies do to strengthen a culture that values these new tools? An important first step—one that is cost-free and relatively easy—is to create an agency evaluation policy.
A small but growing number of agencies already have evaluation policies. They’re public documents posted on agencies’ websites that describe the principles agencies seek to follow when they conduct program evaluations. In fact, the Evidence Act, as it is known, requires each agency evaluation officer to establish and implement an evaluation policy that pertains to the entire agency, not just the evaluation office.
My advice: Don’t wait. An evaluation policy can be a foundation stone for further Evidence Act implementation. And for sub-agencies not directly subject to the law, evaluation policies are just as useful. Why? One reason is that, to quote OMB guidance on the Evidence Act, they “affirm the agency's commitment to conducting rigorous, relevant evaluations and to using evidence from evaluations to inform policy and practice.” That message is equally important for both internal staff and leadership and for external stakeholders and the public.
Another reason evaluation policies are useful: They help establish or strengthen relationships among evaluation staff, program operators and policy officials. In particular, the policies can help facilitate conversations around the evaluation process and about making evaluation as relevant as possible to program and agency leaders—whether it’s to inform program improvements, future funding announcements, or budget decisions or justifications.
Thankfully, agencies don’t need to reinvent the wheel. Several already have evaluation policies, including the U.S. Agency for International Development, the Labor Department and the Small Business Administration. The evaluation policy of the Administration for Children and Families, a sub-agency of Health and Human Services, is among the best known and provides an excellent template. It highlights five principles (with explanations paraphrased by me):
- Rigor: Committing to getting as close to the truth as possible. That means using the most rigorous methods that are appropriate to the evaluation questions and feasible within budget and other constraints.
- Relevance: Ensuring that evaluation priorities—the questions the agency is asking and answering with program evaluations and other types of evidence-building—are relevant to the program operators and policymakers who will be using the information.
- Transparency: Making evaluation methods and data easily accessible so that others can replicate and critique the work. Importantly, it also means making the results public and accessible regardless of the findings, and doing so in a timely manner. An increasing number of agencies go further by registering their studies and making their evaluation design plans public on their websites.
- Independence: Insulating evaluation functions from undue influence. As Naomi Goldstein, who leads the evaluation office at ACF, put it, “A broad range of stakeholders should be involved in deciding the questions. They shouldn’t be involved in deciding the answers. Those are empirical.”
- Ethics: Safeguarding the dignity, rights, safety and privacy of participants. Many laws and principles govern research involving human subjects, and the idea here is to comply with both their letter and their spirit.
In developing and using an evaluation policy, federal evidence leaders emphasize two steps. The first is getting broad buy-in and feedback across the agency, from program staff to leadership, when developing a policy. The second is to work hard to socialize the principles after they’re created—that is, find ways to make the principles widely known, such as by referencing them in conversations with programs or external stakeholders. A few years ago, one agency evaluation leader even ordered baseball caps for her staff (paid for out of her own pocket) with the agency’s evaluation principles emblazoned on them. That’s an unusually creative approach, and a good reminder that even well-crafted evaluation policies won’t do any good if no one knows about them.
The Evidence Act has many useful parts and will take years to fully implement. But by developing and publishing an evaluation policy sooner rather than later, agencies can lay the groundwork for next steps and clarify what principles will guide their work. That includes times when the winds blow in the opposite direction, whether it’s pressure to obscure negative findings or use less-rigorous methods. Evaluation policies are lines in the sand that say: “Our agency does evaluation right.”
Andrew Feldman is a director in the public sector practice at Grant Thornton and also hosts the Gov Innovator podcast. He served as a special adviser on the evidence team at the White House Office of Management and Budget in the Obama administration.