Public Policy

How Do We Improve the Process of Government Improvement?

March 18, 2015

Have you read about past efforts by the United States federal government to improve its performance? If so, you’ve encountered a jargon-heavy world, complete with an alphabet soup of initiatives:

  • Performance budgeting;
  • Planning-programming-budgeting systems (PPBS);
  • Management by objectives (MBO);
  • Zero-based budgeting (ZBB);
  • War on waste;
  • Total quality management (TQM);
  • New public management (NPM);
  • Government Performance and Results Act (GPRA);
  • Lean six sigma;
  • Program Assessment Rating Tool (PART);
  • “Rigorous” evaluation;
  • Stat movement;
  • High priority performance goals (HPPGs); and
  • GPRA Modernization Act (GPRAMA).

If you read through these bullets, you’ve reviewed the last 60 years of centrally directed efforts to improve the federal government’s performance. Each initiative originated in a sincere desire to use social science and business methods to address society’s problems. Too often, however, these efforts were conceived narrowly, driven by fad-fueled enthusiasm, or lacking in cumulative learning from past experience. Eventually, most were discarded. Many left uncertain legacies. It’s not clear that these efforts have improved the capacity of federal agencies to improve.

Groundhog Day—the Government Performance Version

This multi-decade series of efforts to improve government performance has been disjointed, to say the least. To a large extent, each iteration has drawn from a relatively narrow view and theory of how to improve. Under the thought leadership of central institutions including the Office of Management and Budget (OMB), the federal government has careened from an emphasis on performance measurement, to an emphasis on policy analysis tools, to belief in statistical approaches, to faith in goal-setting, to viewing randomized controlled trials as the preeminent form of program evaluation. History demonstrates that most efforts to improve the process of government improvement have suffered over time from a lack of coherence and continuity. Furthermore, a review of the various literatures on government performance improvement suggests widespread disappointment with their success. Why? What’s going on here?

In a forthcoming article in the American Journal of Evaluation, my co-author and I discuss what we believe to be part of the answer. In brief, what we might call the field of “government performance improvement” has suffered from tribal thinking and correspondingly narrow initiatives. To focus on terms that are prevalent now: performance measurement, data analytics, and program evaluation are all part of the effort to produce evidence about public and nonprofit programs that can be used to improve public performance and enhance learning. Yet performance measurement, data analytics, and program evaluation have been treated as separate tasks. Scholars and practitioners who focus on each of these approaches speak their own languages in their own circles. Even the diverse field of evaluation has been balkanized at the federal level in recent years: OMB has emphasized one kind of impact evaluation and tended to ignore, and even disparage, other types of summative and formative evaluation.

Recognizing and Overcoming Balkanization

This conceptual balkanization has been pervasive over time, both in practice and academic discourse. It also has had extensive ripple effects on institutions and the mind-sets of individuals who work in them. In our reading and experience, characterizing performance measurement and analytics as being distinct from evaluation has led to a persistent and pervasive separation among groups of people that has been costly in terms of both resources and organizational learning. In practice, an agency may contain an island or two of evaluators in their own shops.

In some agencies with extensive administrative bureaucracies, there may be little emphasis on evaluation per se, but instead shops that focus on analytics or operations research. Some agencies have also developed capacity in “policy analysis,” which often treats summative methods such as cost–benefit analysis and impact evaluation as the preferred tools. In contrast, staff assigned the function of complying with the Government Performance and Results Act have generally focused on goal setting and performance measurement; historically, these staff have had little training or experience in other evaluation and analytical methods. Each agency may have a unique constellation of these capacities, with one-off success stories of integration, but typically the capacities have not worked together and in many cases have not even been aware of each other.

Due in part to a sense of separateness among three groups of people—scholars and practitioners of (1) performance measurement, (2) the increasingly popular data analytics, and (3) the broad, multidisciplinary field of evaluation—a corresponding sense of separateness prevails among the constructs of measurement, analytics, and evaluation. However, we suggest that if performance measurement and data analytics were consistently viewed as parts of evaluation practice—parts that could benefit from insights from other evaluation practitioners—public and nonprofit organizations would be better positioned to build the intellectual and organizational capacities to integrate methods and thereby better learn, improve, and be accountable.

Moving Toward a More Strategic Approach to Evaluation Within Organizations

If the goal of evaluation (including measurement and analytics) is to support achievement of an organization’s mission, we argue that the evaluation function within an organization should be conceived as a unified, interrelated, and coordinated “mission-support function,” much like other mission-support functions such as human resources, finance, and information technology. Regardless of how evaluation-related mission support is organized, distributed, and coordinated in a particular organization—which may vary to reflect organizational history, culture, and stakeholder interests—this function could support general management of an agency, program, or policy initiative. Evaluation-related mission support could work closely with other mission-support functions, albeit with the expectation that analysis, measurement, and other evaluation approaches will be valued as a genuine source of mission support rather than neglected or implemented piecemeal.

My co-author does not advocate for policy options, given the nature of his job. Consequently, I am speaking for myself when I argue that the conceptual arguments in our article call for action. Among other things, federal agency leaders should:

1. Design a credible and “independent” evaluation function in a strategic and comprehensive manner, ensuring that its organizational location supports collaboration across offices. Note: not independent in the manner of Offices of Inspector General, but respected by, yet not perceived as co-opted by, program management. The Chief Evaluation Office at the Department of Labor provides a superb model.

2. Offer incentives that encourage program managers to request evaluation work to support learning and performance improvement, not solely for accountability. Senior executives should develop learning agendas for their divisions that specify how and when evaluation support can further managerial learning objectives.

3. Design and empower the evaluation function to take a lead role in informing organizational leadership and management about the credibility of evidence for assessing past performance, and about the prospective relevance of different kinds of evidence for supporting learning, performance improvement, and decision making.


Kathryn Newcomer is the director of the Trachtenberg School of Public Policy and Public Administration at George Washington University.

