
After Federal Officials Sent Letters to Over-Prescribing Docs, Prescriptions Fell and Patient Safety Rose

A small office at GSA is helping agencies apply behavioral science to reap big improvements in program effectiveness.

The Centers for Medicare and Medicaid Services recently analyzed data for antipsychotic drugs prescribed to elderly patients and found that some doctors were over-prescribing these potent and costly drugs. So CMS sent letters to the high-volume prescribers, informing them of how their prescription rates compared with those of other doctors in their state. That single action, sending the comparison letters, reduced prescriptions by 11%, thereby saving money and improving patient safety, CMS found.
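As a rough illustration, a peer-comparison analysis of this kind can be sketched in a few lines of pandas. The column names, data, and 90th-percentile cutoff below are hypothetical assumptions for illustration, not CMS's actual method.

import pandas as pd

# Hypothetical prescribing records: one row per doctor, with the doctor's
# state and an antipsychotic prescription rate per 100 elderly patients.
claims = pd.DataFrame({
    "doctor_id": [1, 2, 3, 4, 5, 6],
    "state": ["OH", "OH", "OH", "TX", "TX", "TX"],
    "rx_rate": [4.1, 9.8, 3.5, 2.9, 11.2, 3.3],
})

# Compare each doctor's rate with peers in the same state.
claims["state_median"] = claims.groupby("state")["rx_rate"].transform("median")
claims["pct_above_peers"] = 100 * (claims["rx_rate"] / claims["state_median"] - 1)

# Flag the highest prescribers in each state as candidates for a
# comparison letter (the 90th-percentile cutoff is an illustrative choice).
cutoff = claims.groupby("state")["rx_rate"].transform(lambda s: s.quantile(0.9))
letters = claims[claims["rx_rate"] >= cutoff]
print(letters[["doctor_id", "state", "rx_rate", "pct_above_peers"]])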

CMS undertook this initiative with the help of a small office in the General Services Administration that specializes in applying data analytics and behavioral science techniques such as comparison letters.

As I’ve noted in previous blog posts, behavioral science is a growing field that is still somewhat diffuse in nature. It is international and multidisciplinary, and its use in the public sector has evolved rapidly over the past several years. This evolution has come in tandem with the growth of evidence-based government, data and analytics, rapid-cycle testing, and pressures to improve customer experience with government services.

In the U.S. federal government, these different threads intersect in GSA’s Office of Evaluation Sciences. This small office was created in 2015 to provide a cadre of talent to help agencies use these new techniques to get better results from their programs.

Interestingly, this office preceded the passage of the Evidence Act earlier this year, which will create even greater demand for its specialized talents as agencies are pressed to develop their own evidence and evaluation strategies, including the use of behavioral science techniques. For example, the Labor Department has already developed a guide for its operational bureaus on how to use behavioral interventions to improve their programs.

The Team 

The Office of Evaluation Sciences is a multidisciplinary team that blends the range of professional disciplines making up the field of behavioral science: psychology, economics, political science, ethnography, statistics, and program evaluation. Under the leadership of Kelly Bidwell, the office conducts work that spans behavioral science, evidence, and evaluation. It supports agencies, for example, in carrying out the Office of Management and Budget’s implementation guidance for the recently passed Foundations for Evidence-Based Policymaking Act of 2018.

The office is located in GSA’s Office of Governmentwide Policy and has a staff of about 15-20 specialists, a mix of career civil servants and rotational staff from academia or nonprofits serving one- to four-year terms. Staff members typically oversee two to four projects at a time. Office director Bidwell says the use of rotational staff keeps the career staff connected to cutting-edge intervention design methods, such as choosing appropriate sample sizes, evaluation designs, and analytic techniques. And since staff are federal employees, they have greater access to federal administrative data sets for analyses than would academics or other non-federal researchers.
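To make the sample-size point concrete, here is a minimal power calculation of the kind that intervention design involves, using statsmodels; the response rates and design parameters are illustrative assumptions, not OES figures.

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Suppose a redesigned notice is expected to lift a response rate from
# 20% to 23% (illustrative numbers). How many people does each arm of
# a two-group trial need in order to detect that lift reliably?
effect = proportion_effectsize(0.23, 0.20)
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect,
    alpha=0.05,        # 5% false-positive rate
    power=0.80,        # 80% chance of detecting a true effect
    alternative="two-sided",
)
print(f"Required sample per arm: {n_per_arm:,.0f}")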

The OES team’s approach is to undertake rapid-cycle projects, using low-cost solutions (e.g., redesigning a notification letter). Their core deliverables are actionable results to drive better programs and policies. All projects are posted and summarized on the office’s website.

What They Do  

Agencies approach OES for help conducting projects that require expertise they may not have on their own staff. OES typically works on 20-30 projects at a time with a wide range of agencies, helping to clarify identified problems (e.g., defining the gap between a program’s goal and reality in order to identify the key trip points), test interventions (often using randomized controlled trials and large existing data sets) and, where successful, determine how to scale the pilot to a larger population.
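A stripped-down version of such a trial, randomly assigning recipients and testing the difference in outcomes, might look like the following sketch; the sample, outcome, and response rates are simulated purely for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Randomly assign 10,000 hypothetical recipients to the redesigned
# notice (treatment) or the existing one (control).
n = 10_000
treated = rng.random(n) < 0.5

# Simulated binary outcome (e.g., responded to the notice), with a
# small lift in the treatment group -- purely illustrative data.
responded = rng.random(n) < np.where(treated, 0.23, 0.20)

# Estimate the effect as a difference in response rates and test it.
lift = responded[treated].mean() - responded[~treated].mean()
t_stat, p_value = stats.ttest_ind(responded[treated], responded[~treated])
print(f"Estimated lift: {lift:.3%}, p = {p_value:.4f}")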

According to Bidwell, many of the OES team’s solutions are inexpensive and can be implemented relatively quickly, based on six- to 12-month trials. Their proposed interventions typically don’t require legislation, regulatory changes, or significant funding. Where possible, they like to conduct large-scale testing using federal administrative data and apply rigorous evaluation techniques to develop defensible findings. Their approach is experimental: typically iterative and trial-and-error. Often, their solutions involve changing the way a program is described, the timing, or the sequence of choices being offered.

Bidwell says her team likes to work in partnership with agencies, with the goal of transitioning ownership of the project to the agency partner. Over the long run, Bidwell says, the staff hopes to create an appetite for using behavioral and analytic techniques.

Actions taken by OES’s agency clients range from scaling up a successfully tested intervention, to reorganizing administrative data so it can be used to answer new questions, to retesting a successful intervention on a different population. So far, the staff has found agencies are more reluctant to change a program’s design (such as changing default settings on application forms) than they are to make small changes (such as fine-tuning the presentation of information). However, they hope to generate evidence on the effects of more substantial changes in the near future.

A Range of Projects 

What kind of projects does OES undertake with different federal agencies? Team members work across the government to provide end-to-end support in designing an evidence-based programmatic change and testing the change to measure its impact. Bidwell says that sustaining such change is more effective when the OES team collaborates with internal agency champions who drive the process, participate in the design and implementation of an evaluation, assist in the analysis and interpretation of results, and make decisions about scale and program implications.

Recent projects OES has undertaken span a number of policy areas, and Bidwell says that lessons learned in one program are sometimes transferable to programs in other agencies.

Potential Next Steps 

OES staff have created informal networks of peers across agencies. They currently leverage existing and new networks, such as those of agency performance improvement officers, chief data officers, and chief evaluation officers. As the use of behavioral insights matures across government, there may be a collective effort to create a more formal network of peers. This has happened in other fields, such as among risk managers, evaluation experts, and strategic foresight practitioners.

Another potential future step might be the creation of a “playbook” of behavioral insights for use by agencies. Bidwell, however, says that the work done by OES and other federal agencies may not yet have created a critical mass of evidence sufficient to warrant a playbook, especially given the breadth of different behavioral techniques. In the interim, the office has begun to develop a series of more technical “method guides.” In addition, it holds an event each fall with federal agency collaborators to share what they have learned and the results from every project completed the previous year. Finally, as it completes more projects and inventories efforts from other agencies, it has begun to identify which techniques deserve future attention and how to implement them effectively in a federal context. For example, OES has identified some of the clearance barriers to modifying agency forms as it has helped agencies improve their design and wording.