Quality Matters
eBook - ePub

Quality Matters

Seeking Confidence in Evaluating, Auditing, and Performance Reporting

  • 332 pages
  • English
  • ePUB (mobile friendly)
  • Available on iOS & Android
About this book

Information--regular, systematic, reliable--is the life-blood of democracy and the fuel of effective management. Surely today there is no problem with information, for this is the age of information overload. It pours onto our computer screens and out of our printers. Indeed, many governments claim, often with some justification, to be more open and transparent than ever before. But what if the life-blood is contaminated, or the fuel polluted? Then the body politic sickens and the engine of public management runs rough. It is the vital issue of the quality of the information we receive that this book addresses.

Quality Matters compares approaches across different jurisdictional settings and across three different types of information evaluation. The chapters describe and analyze quality assurance in a number of countries and within a variety of international organizations. These have been selected either because they are widely considered to be leaders in evaluating information or because they have experience with assuring quality information that can instruct others. Contributors are from Australia, Canada, the European Union, France, the Netherlands, New Zealand, Sweden, Switzerland, the United Kingdom, the United States, and the World Bank.

This pioneering study analyzes practices for assuring the quality of evaluation, performance auditing, and reporting in the face of political, organizational, and technical obstacles. A final chapter addresses the extent to which quality assurance systems become bothersome rituals or remain meaningful mechanisms to ensure quality control. This well-structured volume will be of particular interest to policymakers and adds much to the literature on program evaluation and performance auditing.

Quality Matters, by John Winston Mayne and Robert Schwartz, is available in PDF and ePUB format. Subject: Politics & International Relations.
Part 1
Evaluation
2
Devising and Using Evaluation Standards: The French Paradox
Jean-Claude Barbier
In European comparative terms, France's history in the evaluation domain is relatively short (Monnier 1992). By 1990, however, it was generally thought that a breakthrough had been achieved and that evaluation had, so to speak, been "institutionalized" for good ("institutionalization" here meaning that significant institutional bases had been set up and that decisive steps had been taken in the social construction of evaluation as a professional activity in its own right).
Evaluation practice (or rather, practices calling themselves "evaluation") subsequently grew considerably, in a context of continuous controversy between different conceptions and of increasing stimulus from both the European Union and the process of devolution to regional authorities (Conseils Régionaux). Nevertheless, no homogeneous "evaluation community" has emerged in France, and no set of professional guidelines, standards, or norms for assessing evaluation has ever been formally adopted by any significant group of professionals.
On the other hand, the period 1990–96 did see the dissemination and use of norms by the Evaluation Scientific Council (CSE, Conseil scientifique de l'évaluation). This is somewhat paradoxical.
After briefly explaining the historical context and the main reasons why we think the status of evaluation in France has not yet stabilized, we present three French "paradoxes." One of them is that, from a meta-evaluation perspective, we are able here to reflect on and review CSE's experience, using its case studies to demonstrate how criteria were built and used to assess the quality of a small number of evaluation studies whose piloting was under its supervision at the central state level.
The final section attempts to establish how this valuable legacy might influence the future development of standards in France, in the context of renewed institutions and the emergence of new actors, including the French Evaluation Society (SFE).
The present analysis draws on several sources. One is the analysis of the reports published by the CSE (see references). Another consists of interviews conducted with former members of the CSE as well as members of regional bodies in three regions. A series of systematic interviews was also carried out by a working group of the SFE in 2001, as a contribution to its board's strategic guidelines for the future. At the time, the author was a member both of the group and of SFE's general secretariat, positions that provided extensive opportunities for "inside" observation.
Institutional Background: Why Quality Assessment and Institutionalization are Linked
The chances that efforts to achieve the adoption and widespread use of common norms and standards will succeed in the French context are primarily dependent on structural, institutional, and political factors.
From the late 1980s, evaluation re-emerged on the French political agenda, after a rather short-lived experiment in the late 1960s and early 1970s. At that time, inspired by American practice (Monnier 1992; Spenlehauer 1998) and under the name "Rationalisation des Choix Budgétaires" (RCB), evaluation was introduced in the Finance ministry and piloted by its strategic economic studies department (Direction de la prévision). This rather "scientistic" approach, which aspired to a system that would truly rationalize all public expenditure decisions, was eventually abandoned in 1983. Since then, it has been a constant feature of the French system that evaluation has never been directly related to the budgetary process.
The 1970s experience, although generally considered a failure, was certainly not without impact, inasmuch as it contributed to altering public management frames of reference, at least within limited state elite circles. Duran et al. (1995: 47) rightly record that the 1986 Commissariat Général du Plan's so-called "Deleau" report drew its inspiration from analyses of the "limits of the welfare state" and, as such, was not alien to the RCB experiment. Deleau et al. (1986) insisted on an orthodox and rather strict cost-efficiency approach. The tone was entirely different when Prime Minister Rocard embarked on an initiative to "modernize public management" in France. Rocard's directive encompassed four main orientations; one was the transformation of human resources management in the public sector, and evaluation was a second. Fontaine and Monnier (2002) stress that this important symbolic act used a rare window of opportunity to promote evaluation in a country altogether unfamiliar with it.
In January 1990, the Evaluation Scientific Council was set up by presidential decision as an advisory body to the cross-departmental evaluation committee (CIME, comité interministériel de l'évaluation) created at the same time. CIME was in charge of deciding which evaluations were eligible for funding by a special central government fund. From the start, this meant that only some evaluations, agreed upon at the cross-departmental level, were going to be, so to speak, at center stage of "institutionalized" evaluation. At the same time, all shades of studies and research, as well as audits and inspections, were being devised and implemented independently under the freshly popular name of "evaluation." A considerable number of conferences and reflections were sparked at the time.
CSE's legal mandate encompassed methods and ethics, and it was supposed to control the quality of particular CIME evaluations. As expected from the French political and administrative systems, CSE was composed of two main types of experts in approximately equal representation: academics and Grands Corps, that is, top civil service members (belonging to the national statistical institute [INSEE], audit courts, or inspection units). In its first composition, CSE also had one private sector member. Jean Leca, an internationally known professor of political science, chaired the Council. A few years later, when new members were nominated, top civil servants formed a slight majority. After a promising start in the early 1990s, during which the legal selection procedure for evaluations functioned smoothly, the process gradually withered. In 1995, the government stopped initiating evaluation projects, bringing CIME activity to a halt. Accordingly, CSE was sidelined, and the prime minister's office abstained from choosing a new president. As departing members completed their mandates, no new members were nominated.
This explains why the body of meta-evaluation we are able to analyze here consists of only a handful of operations. In the period 1990–96, a little under twenty evaluation projects were analyzed by CSE, of which fewer than fifteen underwent the complete assessment process (we analyze thirteen of them). CSE's grasp of evaluation practice in France was thus at once limited and centralized. The small number of evaluations it was able to assess is nevertheless in inverse proportion to its importance in establishing quality standards.
In parallel, from the early 1990s, a number of regional authorities embarked on regional programs of evaluation. The passing of a new regulation in 1993, which made evaluation compulsory for the Contrats de plan Etat-Régions (joint central state/regional contracts), gave a clear impetus to regional involvement in evaluation. Only some regional authorities then embarked on introducing systematic evaluation and set up special committees, sometimes involving partnerships with regional state administration representatives (Pays de la Loire and Brittany, for instance). Involved in commissioning evaluations, designing their procedures, and steering their processes, these bodies have also developed limited practice in the area of quality assessment. In a handful of cases, their practice drew upon CSE's parallel activity of constructing standards and norms. Individuals sometimes played key roles in using CSE experience to construct regional "doctrines" of evaluation (as in the cases of Pays de la Loire and Brittany). Nevertheless, the contribution of these regional committees has certainly remained secondary and very informal.
Administrative Tradition, Multi-Level Governance and an Emerging Market
These developments ought to be understood in the particular French institutional context. One of its essential characteristics is a very uncertain approach to "accountability." As Perret (CSE 1993: 72) rightly observes, the notion is rather shakily established in French. A structural feature of France has been the centrality of the state and the notion that it embodies the general public interest. This explains why top civil servants, along with academics (who, incidentally, are also top civil servants), were bound to play a central role in the new "institutionalization" phase from the late 1980s. It should also be stressed that central government in France still commands a "quasi-monopoly" in matters of policy analysis and evaluation expertise, although a significant share of studies is, of course, outsourced. In empirical terms, the quasi-monopoly was recently described in a special report commissioned by the French Senate, comparing the U.S. and French situations (Bourdin 2001), after a senatorial mission to the United States. The Ministry of Finance, INSEE (the national statistical agency), and the central audit and control agencies (Cour des Comptes, Conseil d'Etat) dominate the policy analysis field (Perret, CSE 1993: 76). Political scientists have comprehensively analyzed this situation, peculiar to France, which has successfully withstood marginal efforts to introduce more pluralism from the 1970s on (Jobert and Theret 1994). Jean Leca, CSE's former president, stressed that no specific social actors had emerged in the early 1990s to engage in independent evaluation (Leca, CSE 1992: 2). The absence of an organized profession leads to an embedded de facto eclecticism in terms of references and standards, which blurs the frontiers between evaluation and other activities (research, consulting, audit, and control) (CSE 1992: 13). In such a context, achieving any form of consensus on quality assessment within the various evaluation milieus is very difficult indeed.
In a situation combining a quasi-monopoly of state expertise and the absence of a profession, the driving force was bound to come from the demand side of the evaluation market. This demand is pushed by two factors pertaining to the growing influence of multilevel governance. On the one hand, EU-level practice and its general "standards" have played an increasing role, notably because EU-level programs all include the explicit implementation of evaluation regulations (the European Community's structural funds). In many areas of traditional social policy, the EU is the dominant buyer of evaluation studies. However, in the complex relationship between member states and the EU Commission, complying with formal regulations may lead to highly disparate types of studies: we contend that there is, as yet, very little spill-over from EU quality assessment practice into the French debate (Barbier 1999). "Mainstream evaluation" (if such a notion is meaningful in the French context) is thus implemented by "evaluators" with a limited grasp of the international state of the art. In some cases, such evaluators explicitly consider that there is no reason to acquire such knowledge and professional experience. Tellingly, when we interviewed him, the chief executive of one of the significant medium-sized consulting firms was completely unable to formulate an answer as to what evaluation was, and could not identify its contribution to the firm's turnover or cash flow.
Conflicts of Conceptions and Advocacy Coalitions in the “Jacobin” Context
Conflicting views as to what evaluation actually is have significantly hindered a consensual approach to quality in the domain of evaluation, an area where the French are distinctive. As Duran et al. (1995) have noted, the French harbor more controversies about the notion of evaluation than actual practice of it. A typical and enduring controversy pits "managerial" evaluation against "democratic" evaluation. Although the distinction has obvious analytical substance, the long-lasting debate verges on the absurd and must be related to the uneasy institutional context described above. A third paradigm, "pluralistic" evaluation, has tried to sidestep the opposition between "democratic" and "managerial," with limited success so far.
For its fiercest critics, managerial evaluation roughly fits a more or less neo-liberal agenda while trying to pass as politically neutral; in their view, its only purpose is cost-cutting. Democratic evaluation, on the other hand, is often seen by its proponents as strictly opposed to any "public management" concern, and as interesting and valuable only insofar as its findings are democratically discussed. In the French context, taken to its extreme, this conception of evaluation is linked to a "voluntaristic" (Jacobin, and often lacking substance) stance in politics.
The idea that good management practice and democracy are incompatible, although very strange indeed, is commonplace in France. One telling example was recently provided by R. Forni, at the time president of the French National Assembly, who praised what he saw as "voluntaristic" government success, as opposed to the trivial activity of costing programs: "Even when they are in the opposition, politicians certainly have better things to do than inspecting the details of figures and funding channels, as if they were small-time bookkeepers" (Le Monde, 6 June 2001). Leca (1997: 2–14) suggests that the only coherent approach to the management/democracy divide is to consider the two dimensions as closely interlinked. Altogether, the management/democracy controversy has been an important factor in explaining why organizational initiative and agreement on a set of evaluation norms have proved so difficult in the French evaluation milieu.
Interestingly, the "pluralistic approach," described by Duran et al. (1995) as "à la française," provides a third way out of this sterile debate. Why? The pluralistic approach can be seen as the quest for a new methodology (or design, in the broadest sense of the term), which is based on certain norms of quality but, because of its particular approach to the qu...

Table of contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright Page
  5. Table of Contents
  6. Foreword
  7. Introduction
  8. Part 1: Evaluation
  9. Part 2: Performance Audits
  10. Part 3: Performance Reports
  11. Part 4: Conclusion
  12. About the Authors
  13. Index