Evaluating the Complex

Attribution, Contribution and Beyond

Edited by Kim Forss, Mita Marra, and Robert Schwartz

About this book

In the economic climate following the crisis of 2008, governments have not only reacted by creating more complex policy initiatives; they have also promised that all of these initiatives will be evaluated. Because many of the initiatives are complex, the methods used to evaluate them are becoming equally complex.

The book begins with a theoretical and conceptual explanation of the evaluation process and shows how this translates into the practice of evaluation. The chapters cover a wide variety of subjects, such as poverty, homelessness, smoking prevention, HIV/AIDS, and child labor. The use of case studies sheds light on the conceptual ideas at work in organizations addressing some of the world's largest and most varied problems.

The evaluation process seeks a balance between order and chaos. The interaction of four elements—simplicity, inventiveness, flexibility, and specificity—allows complex patterns to emerge. The case studies illustrate this framework and provide a number of examples of practical management of complexity, in light of contingency theories of the evaluation process itself. These theories in turn match the complexity of evaluated policies, strategies, and programs. The evaluation process is examined for its impact on policy outcomes and choices.


1

Introduction

Kim Forss and Robert Schwartz

A Changing Demand for Information

Problem solving by policy initiative is here to stay. Overarching policy initiatives are now the standard modus operandi for governmental and non-governmental organizations (NGOs). Some of these initiatives aim to affect the big problems of the early twenty-first century: poverty, hunger, infectious disease, unhealthy behavior, and income disparity, to name a few. Reminiscent of the American Great Society programs of the 1960s, policymakers in various jurisdictions are allocating more resources to solving big problems. Unlike the Great Society initiatives, however, new overarching policy initiatives often harness a variety of programs and projects to address different aspects of big problems for various population groups. Ten years after its adoption, the Lisbon Strategy to promote growth and competitiveness in Europe is being evaluated during 2010. Governments have reacted to the financial crisis of late 2008 and the ensuing recession with a variety of policy initiatives. At the same time as billions of dollars, euros, or pounds are allocated in response to the crisis, taxpayers are being promised that it will all be evaluated.
Complex policy initiatives are no longer reserved for the big challenges of our times; they are now commonly used for routine matters such as school achievement, urban planning, and public health and safety. Reflecting an understanding that no single program intervention can address all needs, these policy initiatives provide an umbrella of resources and implementation infrastructure to advance various projects adapted to localized conditions. This understanding leads, in turn, to complex strategies.
Overarching policy initiatives can also be found at the supranational and even global levels in a broad range of areas, including environment, security, trade, immigration, and economic and social development. Transjurisdictional fiscal and monetary policies are now common features of overarching policy initiatives. Highly articulated strategies are, therefore, decided globally while they affect people’s lives locally. Of course, overarching policy initiatives are not new. What appears to be new is a growing demand for effective evaluations of complex policy interventions at the international, national, and local levels. There is now pressure on politicians, stakeholders, and senior management officials to demonstrate that resources invested in policy initiatives have been well spent. They need to justify the overall cost of the policy and the allocation of policy resources to international, regional, and local programs and projects. Increasingly, they are asked about the value obtained for the money, relative to alternative investment channels. Evaluators are in turn asked to address these questions. The chapters in this book address the extent of demand for evaluating the effectiveness and cost-effectiveness of complex policy initiatives. Why this demand? Three overlapping mantras of contemporary management offer possible explanations: accountability, results-based management, and evidence-based policy.

Accountability

Elaborating on the depiction of the audit explosion, several observers describe accountability overload in various jurisdictions. Voters love to hear that government will be more accountable. But who is the government? Voters want to hold elected politicians accountable, but ministers often feel that they do not have sufficient control and information. Hence, politicians rely on evaluation as one of the managerial tools with which they can steer the administration. Not only voters, but also those voted into power, need evaluation. It really is an explosion. The popularity of accountability has not been lost on politicians, who have pushed for countless accountability improvement stipulations in supranational, national, and regional jurisdictions (Beck, 1992). Large complex policy initiatives are natural targets for accountability seekers, as they expend big chunks of resources.
There are thus many aspects to the audit explosion, which may perhaps better be described as a “changing architecture of accountability.” This goes back a long way. In the United States, the so-called Friedrich-Finer debate of the 1940s was about whether external controls were sufficient to ensure accountability, or whether professional and ethical motivations were also necessary. In Sweden, the debate peaked in the late 1990s when the government’s own tool for accountability and performance analysis was taken over by parliament in an effort to strengthen its analytical capacity and its ability to hold government accountable.
The work of performance auditors has expanded. Whereas in the past it was largely confined to assessing the rule of law through the correct administration of government decisions and the effective (noncorrupt) use of funds, auditing has come to ask questions about impact: policy impact and the administration’s ability to learn, adapt, and develop innovative policy responses to social problems. In consequence, performance auditors need to understand long chains of interaction, with feedback loops (Pierre and Peters, 2000; Ling, 2002).
When policies are formulated and accountability for taxpayers’ money is promised, the exact nature of that promise is often elusive. Many important issues of the day (such as climate change, migration, terrorism, and urban sprawl) involve managing risks rather than delivering measurable outcomes (Culpitt, 1999). The risks might have been well managed whatever the outcome. The task of assessing the costs and benefits is growing, and those who are usually called upon to provide data and analysis to support accountability find themselves in a booming business. Successful performance audit to establish accountability depends on a wider clarity in society about its breadth and depth (Lonsdale et al., 2011). All these pressures on accountability make the task of evaluators more complex.
But what are “accountability” and “evaluation for accountability”? A recent book exploring this topic (Bemelmans-Videc et al., 2007) indicates that neither concept is clear. Too often, approaches to accountability fail to take into account the inherent complexities of public sector policies and interventions, and may perversely act as a disincentive to improved performance and to engagement with complex policy situations.

Results-Based Management

New public management reforms, though no longer new, have had a lasting effect in getting governments and NGOs to focus on results. NGOs in particular, but also many government services, could in the past legitimize their existence by pointing to the importance and relevance of the objectives they were striving to achieve. They achieved symbolic legitimacy by working, for example, for children in need, human rights, or HIV/AIDS victims. But as demands for demonstrating results and competition for funds have increased, symbolic legitimacy is no longer enough. Demonstrating that resources have been spent and activities have been conducted is no longer sufficient. Stakeholders, politicians, and senior managers insist on knowing what has been achieved with the resources. Results count, and those who can point to results stand a better chance in the fund-raising game. There are often requirements to divulge results in performance measurement systems, annual reports, and periodic assessments. Results seekers are often not concerned with the difficulties of attributing results to overarching policy initiatives or to particular programs and projects. What they want is data.
And they get it. Although developments differ from country to country, performance indicators and policy targets are increasingly being used in policy documents and budgets to indicate what performance is expected, for what purpose actions are taken and at what cost (Perrin, 2006). According to a 2003 Organisation for Economic Cooperation and Development (OECD)/World Bank survey of budget practice, 32% of OECD member countries include nonfinancial performance data in their budget documents (OECD, 2004). For nearly 27% of OECD member countries, the inclusion of performance targets on government policy outcomes and/or outputs in the budget documents constitutes a legal requirement (OECD, 2004; van der Knaap, 2006).
Even though demand and supply thus seem to meet, there is still a gap to bridge. The point is well illustrated by Pollitt and Bouckaert (2009), who address the “missing link, namely the use of performance information by ministers, parliamentarians and citizens.” Grand statements about the importance of performance information for democracy sit alongside extensive if patchy evidence that ministers, legislators, and citizens rarely make use of the volumes of performance indicators that are thrust upon them. There is no doubt that performance information is there, and of adequate quality and quantity, but is it used? The issue of use, however, is not a linear phenomenon where one can trace decisions back to the conclusions and recommendations in any given report. Information on results does not speak for itself; the question of what to do in light of performance rests on much more than what monitoring and evaluation systems have to say. “Clear thinking and bold action, based as always on inadequate evidence, are all we have to see us through to whatever the future holds” (MacNeill, 1982). For all the efforts at results-based management that the past decade has seen, there still seems to be an ever-growing demand for performance information.
There is increasing recognition of the inadequacy of the information on results provided by monitoring performance indicators. Those who strive for results-based management seek a better understanding of the influence of policies and programs on the changes they see in performance indicators at the macro level.

Evidence-Based Decision-Making

The current interest in evidence and evidence-based policies and practices is in part an effect of a more general trend in evaluating results. The number of evaluations conducted since the 1960s has steadily increased, as has the demand for performance management and improved quality of public services. About 20 years ago, it became apparent within the medical field that it was necessary to systematize the available knowledge from the many primary studies. Such meta-analyses would prevent idiosyncratic, ineffective, and even harmful patterns of care. This marked the start of the evidence movement. It first appeared in the United Kingdom and the United States, and later spread to other parts of the world, as well as to other sectors of society, notably social work and education. Evidence-based policies and practices are now found in a wide range of policy areas and countries.
The central idea of the evidence concept is that policies and practices should be based on the best available scientific research about what works, what does not, and why. The sheer number of primary research studies must therefore be organized in a systematic, reliable, and cumulative way. Systematic reviews of research make possible the explicit and judicious use of the current best available evidence. To be included in a systematic review, primary studies must use valid and reliable research methods. In some fields, such as medicine, it is generally argued that only experimental studies should be included; others accept a range of approaches and evaluation designs.
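To make the idea of systematic, cumulative synthesis concrete, here is a minimal sketch of the core arithmetic behind a fixed-effect, inverse-variance meta-analysis, the simplest pooling model used in systematic reviews. The study effect sizes and standard errors are hypothetical, and a real review would add heterogeneity tests, quality appraisal, and publication-bias checks that are omitted here.

```python
import math

# Hypothetical effect sizes (e.g., mean differences) and standard errors
# from five primary studies; illustrative numbers only.
effects = [0.30, 0.12, 0.25, 0.40, 0.18]
std_errors = [0.10, 0.15, 0.08, 0.20, 0.12]

# Fixed-effect, inverse-variance weighting: each study counts in
# proportion to its precision (1 / variance), so more precise studies
# contribute more to the pooled estimate.
weights = [1 / se**2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect.
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
```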
A related and more recent development is the movement to include practice-based evidence alongside more traditional scientific evidence in the accumulation of knowledge about what works, when, and why. Many development agencies have systematically worked with synthesis studies irrespective of their methodological foundation, as long as the evidence appears valid and reliable. During the past two years, Danida has commissioned such studies on evidence in combating HIV/AIDS, on microfinance, and on business sector development, and others are on the way.
In the study of scientific method, the debate between disciplines has long raged. Experimental methods have proved eminently successful in many natural sciences, and they have been used to some extent in the social sciences too, particularly in economics. In other disciplines, such as history, jurisprudence, archaeology, anthropology, and ethnology, qualitative methods predominate. In yet others (sociology, political science, management studies) there is a fierce methodological debate.
Accountability fever, results-based management, and the evidence-based policy movement contribute to a sense that everything can and should be evaluated. Indeed, in many jurisdictions evaluation is now a standard operating procedure, automatically included in budgets and work plans. This ought to please evaluators, and there is now an abundance of evaluation work to be done. But the debate on evaluation and policy-making still shows a sense of frustration: that we still do not know enough, either to shape the future or to know what happened in the past. Does that reflect something about the inherent nature of describing and accounting for results? As long as the demand is confined to the project and program levels, the evaluation tool-kit is sufficiently well stocked to cope with a variety of evaluation needs. Complex policy initiatives, however, challenge evaluators in new and daunting ways, beyond the scope of what the existing tools of the trade can manage. But what is this complexity really about? Before proceeding to see how evaluation as a field and evaluators as individuals can respond to the quests for evidence, performance information, and accountability, the issue of complexity itself needs to be addressed.

Complexity: A New Evaluation Context

It is now several years since the debate on evaluation started talking about complexity (see, e.g., Patton, 2002). When the UK Evaluation Society organized its annual conference in 2005, its theme was complexity in evaluations (1). A year later, the joint conference of the UK and European evaluation societies had a large number of presentations that dealt with increasing complexity in evaluation. The annual conferences of the American Evaluation Association have also seen an increasing number of papers and speeches on the subject of complexity over the past couple of years. In a survey of critical developments in the evaluation profession, Michael Patton listed “complexity” as a key feature, as do many others who survey the field and speak about it at professional conferences. Complexity is frequently listed as a keyword in articles in the professional journals. It seems as if complexity is here to stay.
Or is it possibly a fad that will pass after a few years, as new buzzwords come to dominate the professional discussion? The answer to that question could be both “yes” and “no,” depending on what is actually understood by the term complexity. There is no doubt that some evaluation assignments are more difficult to solve than others, and perhaps such assignments are becoming more common. The text above has illustrated how evaluations have moved upscale from simple projects to multifaceted and intersectoral policy initiatives. The questions that politicians and administrators ask are broader and thus require more skills and resources from evaluation teams. But is that what is meant by complexity?
Sometimes what looks like a rather straightforward project evaluation may turn out to be rather difficult to do. People might not want to provide information, there could be one or two stakeholders that block the process, or the data you get could be hard to interpret, or could point in two or more directions at the same time. Such difficulties arise not only out of multifaceted and intersectoral policy evaluation, but can be met in ordinary project evaluations too. Evaluators refer to such problems as examples of complexity in the process. True, any evaluation can encounter such difficulties, but does that mean that it is also complex?

Meanings of Complexity

Looking at how the word complexity is used in the evaluation community, it appears that the answer to both of the above questions would be “yes”: that is what is meant by complexity. It would mean that the assignment as such is broad and refers to multifaceted assessments of social change, that the evaluators need a high degree of professional, scientific skill, and that there are several stakeholder groups involved in the evaluation process with differing political interests and different strategies to influen...

Table of contents

  1. Cover Page
  2. Comparative Policy Evaluation
  3. Title page
  4. Copyright page
  5. Contents
  6. Foreword
  7. 1 Introduction
  8. 2 Implications of Complicated and Complex Characteristics for Key Tasks in Evaluation
  9. 3 Contribution Analysis
  10. 4 Micro, Meso, and Macro Dimensions of Change
  11. 5 Coping with the Evaluability Barrier
  12. 6 Monitoring and Evaluation of a Multi-Agency Response to Homelessness
  13. 7 Evaluating a Complex Policy in a Complex Context
  14. 8 Intervention Path Contribution Analysis (IPCA) for Complex Strategy Evaluation
  15. 9 Responding to a Global Emergency and Evaluating That Response—The Case of HIV/AIDS
  16. 10 Evaluating Complex Strategic Development Interventions: The Challenge of Child Labor
  17. 11 Challenges in Impact Evaluation of Development Interventions
  18. 12 Some Insights from Complexity Science for the Evaluation of Complex Policies
  19. Contributors
  20. Name Index
  21. Subject Index