Introduction
"How'm I doin'?" was the trademark question of Ed Koch, mayor of New York City during the 1980s. In posing his question, Mayor Koch sought reassurances regarding the depth of his political base and feedback on his stewardship of municipal operations.1 As veteran mayors and city managers know, these two dimensions of municipal leadership are not independent of each other. Just as political waves can rock an administration, weak operational stewardship can undermine political strength and stability.
The performance of municipal operations from A to Z (animal control to zoning enforcement) can affect the political health of mayors and city council members, as well as the professional well-being of city managers and other appointed administrators. Unless they or trusted assistants are "minding the store," executives' political or professional stock can quickly decline. Yet all too many officials have only a vague sense of how their own municipal operations are faring. "How are we doing?" is a question that should be asked, and one that deserves to be answered.
PERFORMANCE COMPARED WITH WHAT?
For decades, municipal officials have been urged to measure performance and have been given advice on how to get started. Those who have followed this advice soon discover that most performance measures are virtually valueless when they appear as isolated, abstract numbers; their value emerges only in comparison with a relevant peg.
Several options for comparison exist. Authorities have long suggested that current marks could be compared with those from earlier periods, with other units in the same organization, with relevant outside organizations, with pre-established targets, or with existing standards.2 In theory, such comparisons provide municipal officials both an internal gauge, marking year-to-year progress of a work unit or highlighting unit-to-unit performance differences, and an external gauge, showing how a municipality's operations stack up against other jurisdictions or professional ideals. The internal gauge has been adopted more broadly than the external gauge. For more than a few decades, many local governments have routinely reported year-to-year comparisons of their own departments' performance indicators, although these have often been merely workload measures, showing increasing or declining quantities of various outputs. Relatively few local governments made unit-to-unit comparisons within their own municipality, and prior to the 1990s very few reported external comparisons with standards or with performance indicators from other jurisdictions. Then, in the mid-1990s, changes began to occur, spurred principally by the urgings of the Governmental Accounting Standards Board (GASB), benchmarking successes in the private sector, and the emergence of a handful of projects designed to yield reliable interjurisdictional performance comparisons.
GASB encouraged cities to measure their "service efforts and accomplishments" and, where possible, to compare their results with other cities. Portland (OR) and a few other cities began to do so and publicized their reports. Meanwhile, groups of cities established cooperative projects to collect and share performance statistics. A project administered by the International City/County Management Association (ICMA) with participants from coast to coast was established in 1994, followed by several others focusing on selected cities within individual states and Canadian provinces; for example, projects emerged in North Carolina, South Carolina, Tennessee, Florida, Michigan, Utah, and Ontario.3 Each project grappled with the challenges of matching services and agreeing on uniform measures. Some devised elaborate cost-accounting systems to ensure comparability of unit costs. As various problems were resolved, the participating cities reported comparative statistics and, in some cases, began to use comparative information to reduce costs or improve services.4
WHY SO FEW EXTERNAL COMPARISONS?
Despite progress in many cities, the municipalities that make no performance comparisons with other service providers far outnumber those that do. Why haven't more cities joined cooperative benchmarking projects, and why have so few individual municipalities attempted external comparisons on their own? Perhaps, in some cases, local government officials are happy with the status quo, preferring not to have the performance of their organization judged in comparison with others. After all, only in Garrison Keillor's fictitious village of Lake Wobegon can everyone claim to be "above average."5 In real life, only about half can achieve that status, and, for the rest, the desire to know one's ranking must be motivated not by a yen for publicity and praise but by a yearning to improve. Sadly, many of the jurisdictions mired at the bottom of the performance scale (the ones, they might say, that make the upper 90 percent possible) may prefer to remain oblivious to the numbers that would document their status.
In contrast, some local government officials, including those who have joined formal projects or developed their own comparative statistics, are not only willing to engage in external comparisons but eager to do so. Driven by a desire to climb into the ranks of outstanding municipal performers or to be recognized for already being there, these officials want external benchmarks but often find no more than a handful of any use to them. Those hoping to spur greater accomplishments by urging their organization to keep up with the "municipal Joneses" frequently are disappointed to find that few relevant statistics on the Jones family even exist, at least in a form usable for this purpose. When the most that the city manager can discover about the performance of other cities is a set of workload measures and expenditure levels, few useful comparisons are possible.
If good comparison statistics are so hard to find, why not rely instead on municipal performance standards set by professional associations and others? It sounds simple, and in some cases it is possible to make such comparisons against standards.6 Unfortunately, however, many so-called standards are vague or ambiguous, have been developed from limited data or by questionable methods, may focus more on establishing favored policies or practices than on prescribing results, or may be self-serving, advancing the interests of service providers more clearly than those of service recipients. Furthermore, most standards are not widely known and often are difficult to track down. Prior to the first edition of Municipal Benchmarks in 1996, there was no single repository of standards relevant to municipal operations. Mayors, council members, city managers, department heads, other officials, and citizens wishing to sort well-developed and usable standards for a particular operation from the rest, or even to find those standards, previously faced the prospect of an often difficult and time-consuming search. Understandably, few persevered.
MEETING THE INFORMATION NEEDS OF LOCAL GOVERNMENT OFFICIALS AND CITIZENS
Limited time and competing demands leave few local government officials or interested citizens in a position to track down applicable standards or suitable interjurisdictional performance indicators as external gauges of local performance. The desire to make external comparisons, or the lack of it, is not necessarily the issue; time, resources, and other practical constraints may simply pose too great a hurdle if a reservoir of relevant comparison information is not readily at hand.
That is where this volume comes into play. Between the covers of this book lie standards and comparison statistics intended for ready use by local government managers, elected officials, and citizens. Countless contacts with local government officials, professional associations, and trade organizations, coupled with careful scouring of budgets and other performance-reporting documents from local governments across the United States, along with several from Canada, have produced a collection of municipal benchmarks that will enhance a community's ability to answer the question, "How are we doing?"
Included in the pages that follow are standards, norms, and rules of thumb offered by professional associations, trade organizations, and other groups with a stake in local government. Readers are cautioned here, and will be reminded elsewhere, that the motives of such groups in prescribing standards may range from civic-minded service to self-serving protection of the status and working conditions of association members. It is also important to realize that, in some cases, standards may be intended to imply minimum acceptable levels of performance, in other cases to designate norms, and in still other cases to identify targets toward which local governments should aspire.
Collections of actual performance indicators have been gleaned from the documents of more than 250 local governments of various sizes scattered across the continent. The set of municipalities chosen for this project does not constitute a random sample; each municipality was included because it measures performance and because its reporting documents were readily accessible or otherwise made available. Most of the municipalities included were selected from lists of longstanding recipients of budget presentation awards from the Government Finance Officers Association (GFOA) and recipients of special recognition from ICMA's Center for Performance Measurement. The performance statistics of these GFOA and ICMA award recipients were supplemented by reports from other cities found to document performance in a manner conducive to cross-jurisdictional comparison, including some noted for having adopted "stat" systems modeled after New York City's Compstat and Baltimore's CitiStat as the centerpiece of their performance management efforts.7
Where performance indicators from actual cities conform in format to the standards promulgated by professional associations, they offer readers a "reasonableness review" for those standards. Actual performance of respected cities may influence the interpretation attached by the reader to a given standard: whether it should be regarded as a minimum acceptable level, a norm, or a target of excellence. The performance records of recognizable and respected cities will help answer the practitioner's pragmatic questions:
"Is everybody else meeting this standard?"
"Is anybody meeting it?"
CONTENT, FORMAT, AND USES
Following this introduction is a second chapter that encapsulates a few of the major messages of prominent books, articles, and how-to manuals on performance measurement for local government. Although these publications instruct local officials on the development of ideal performance measures, they typically offer few, if any, relevant comparisons for use once those measures are crafted and operating statistics compiled. This book takes a different approach.
Following brief instructions on the development of good performance measures, the focus of this volume turns toward the interpretation of performance information once it has been collected. Officials who have devoted a year or more to the development of a set of performance measures and the collection of relevant data may be disappointed if they have difficulty finding jurisdictions reporting the same measure in the same fashion. Understandably, they might be frustrated to learn that they must collect the measure a second year before any relevant comparisons can be made, and then only with their own city's performance in the earlier year. With the information in this volume, officials will be able to make more immediate comparisons that will place local performance in an external context.
Chapter 2 also introduces benchmarking as a management tool. It briefly describes benchmarking concepts and the different forms of benchmarking found in the public sector.
Following the chapter on performance measurement and benchmarking are 32 chapters devoted to performance standards and cross-jurisdictional performance indicators for the major functions of municipal government. The concluding chapter, "The Value of Benchmarks," offers final thoughts on uses and benefits of comparing performance either with standards or with the results achieved by strong performers.
A FEW COMMENTS BEFORE PROCEEDING
The set of benchmarks found in this volume can be a useful tool for gauging and improving municipal performance. Like other tools, the benchmarks are most effective in the hands of a craftsperson who not only knows how to use them but also understands their limitations.
On Benchmarks and Benchmarking
Comparing local performance statistics with selected benchmarks is a valuable step in evaluating municipal operations, but a simple comparison is not as definitive as a formal performance audit, a program evaluation, or another form of rigorous analysis, perhaps one based on the "best practices" variety of benchmarking, an approach often practiced by private corporations and described more fully in Chapter 2. A simple comparison of local performance with selected municipal benchmarks is more limited and less precise than these other options, but it is also quicker and less expensive. It is a more practical form of benchmarking for a general assessment of a broad range of functions. Such a comparison p...