Management Matters
Practical advice for federal leaders on managing people, processes and projects.

The Fake News Era Demands Ethics in Government Communication

Good government requires good communication. Only when the public receives timely and relevant facts about government activities can transparency and accountability be achieved. The public servants who provide that information—agency spokespeople, communications specialists, speech writers, social media experts—are a critical link in supporting our democratic processes and helping citizens access services.

Sadly, we have seen too many instances lately where elected officials and high-profile government communicators have failed in this regard.

Almost daily, President Trump uses Twitter to fuel a bombastic narrative, mixed with concepts of “alternative facts” or claims of “fake news,” to drive his agenda. Add to that a nearly complete absence of fact-checking on the part of the White House, and it is no wonder that when the White House press secretary recently corrected misinformation she had provided, many were left thinking, “it’s about time.”

A recent Pew Research Center report, “Public Trust in Government: 1958-2017,” shows trust in government remains near historic lows, with just 18 percent of respondents indicating they trust government in Washington to do what is right always or most of the time.

If that level of distrust is applied to communication about government, where then does the public...

Metrics Are for Playmakers

The Government Accountability Office reconfirmed a pervasive management problem in its new report in the Managing for Results series. The title defines the problem: “Governmentwide Actions Needed to Improve Agencies' Use of Performance Information in Decision Making.” This is not a new theme for GAO, but apparently its past recommendations have had minimal impact. Despite what seems to be continuous attention, the use of metrics has declined, not improved, over the past decade.

As recently as 2016, a column on the Federal News Radio website focused on efforts by the Performance Improvement Council “to bring together tools and people to take better advantage of technology and data” and to “start to think about putting real-time performance data in the hands of federal managers” in more customizable and user-friendly ways. Creating “opportunities to use performance data, especially for program managers” was described as “really the sweet spot.”

Those quotes are damning. Metrics have been required since 1993, when the Government Performance and Results Act required agencies to develop performance plans with measurable goals. In 1997, John Koskinen, then the Deputy Director for Management in the Office of Management and Budget, testified that experience under GPRA “confirmed that virtually...

Thinking Inside the Sandbox

It is perhaps the defining challenge for regulators in most industries today: Figuring out how to apply old laws to new technologies. The financial services industry poses a particularly acute example of this dilemma: much of the legislative framework governing the financial industry was designed before computers and the internet, let alone blockchain and Bitcoin. This is a problem for businesses as much as it is a problem for regulators. It can be hard, if not impossible, to launch an innovative product when there is uncertainty as to how the product will be regulated.

There’s a potential solution to this conundrum, however. Financial regulators in several jurisdictions have used what’s known as a regulatory sandbox approach to help reduce uncertainty for innovative businesses.

In a regulatory sandbox, businesses are eligible for the relaxation of specific regulatory requirements so that they may test new products in a real-world environment, albeit with more hands-on supervision and defined limits to protect consumers. The idea of a federal-level regulatory sandbox seems to have caught on recently among federal regulators, so it’s worth asking: What would a federal regulatory sandbox look like, and what challenges would regulators have in designing or running...

It’s Possible (And Dangerous) To Be Over-Inclusive

By Khalil Smith, Heidi Grant and Kamila Sip
September 11, 2018

Organizations have rightly started making diversity and inclusion top priorities. And accordingly, managers have become more sensitive about who they hire, promote, and assign to projects. They’ve also become more sensitive to sharing information equitably among their staff, and worked harder to give people the right amount of exposure within the department or organization.

This progress is massive, but it has left some collateral damage—namely, wasted time, money, and energy due to a hidden drain on productivity: over-inclusion.

By looping too many people into emails, meetings, and projects, organizations jeopardize job satisfaction and retention, as well as the quality and timeliness of employees’ work. To avoid these pitfalls, leaders must master the art of expectation matching so they can exclude more thoughtfully.

Over-inclusion, defined

We can think of inclusion as following an inverted U-shaped curve. Too little inclusion is a problem we can reasonably call “under-inclusion.” This is what we’re most familiar with: the feeling of being left out, minimized, or excluded.

Then there is the ideal amount of inclusion. It’s when the right people know the right information at the right time. Over-inclusion is being a...

Who In Government Is Asking ‘What Works?’

One year ago this week, the Commission on Evidence-Based Policymaking issued its final report, “The Promise of Evidence-Based Policymaking,” which made a number of substantive recommendations about how to strengthen the governance, collection and use of evidence in the way the government develops, implements and evaluates its programs. A year later, where do things stand?

While there is still precious little use of rigorous evidence in decision-making across government, the good news is that substantive progress has been made on some of the commission’s recommendations. For example, the Trump administration is driving the development of “learning agendas” through the President’s Management Agenda. Learning agendas offer agencies an approach to answering the big questions they have about what’s working and what’s not. The government also has set a cross-agency performance goal to better “leverage data as a strategic asset to grow the economy and increase the effectiveness of the federal government.”

These are good first steps, but there’s still a lot more we can do, starting with the appointment of chief evaluation officers in federal agencies. The commission recommended the federal government “identify or establish a chief evaluation officer in each department to coordinate evaluation and...