Mind the Gap

Perspectives on Policy Evaluation and the Social Sciences

eBook - ePub, 254 pages, English

About this book

Over the past twenty to thirty years, evaluation has become increasingly important to the field of public policy, and the number of people involved in and specializing in evaluation has grown markedly. Evidence of this trend can be found in the International Atlas of Evaluation, in the establishment of new journals and evaluation societies, and in the growth of systems of evaluation. Increasingly, the main reference point has become the assessment of the merit and value of interventions as such, rather than the evaluator's disciplinary background. The growing importance of evaluation as an activity has also led to increasing demands regarding the competencies evaluators should have.

Evaluation began as a niche area within the social and behavioral sciences. It subsequently became linked to policy research and analysis and has, more recently, become trans-disciplinary. This volume demonstrates an association between the evaluation tradition in a particular country or policy field and the nature of the relationship between social and behavioral science research and evaluative practice. The book offers comprehensive data that lead to conclusions about patterns transcending the gap between evaluation and the social scientific disciplines.

Mind the Gap has a twofold aim. The first is to highlight and characterize the gap between evaluation practices and debates, on the one hand, and the substantive knowledge debates within the social and behavioral sciences, on the other. The second is to show why this gap is problematic for the practice of evaluation, while illustrating possible ways to build bridges. The book is centered on the value of producing useful evaluations grounded in social science theory and research.


Part I
The Evolving Relationship between Evaluation and the Disciplines

1
US Sociology and Evaluation: Issues in the Relationship between Methodology and Theory

Nicoletta Stame

Introduction

Theory-based evaluation (TBE) approaches, which now play an important role in evaluation, emphasize the need for theoretical thinking as a way around the black-box trap, in which an evaluation's inability to account for what happens between a program's input and its outcome leads to unsatisfactory results (Stame, 2004). Theories may illuminate not only how something good ought to happen (normative) or whether a thing did or did not happen (descriptive), but also for whom, where, and why it happened (explanatory).1 In particular, they can explain why things have (or have not) happened. Such theories may be the underlying, often implicit, theories of change that policy makers, front-office workers, and others have in mind when they design and implement the program (Leeuw, 2003), or they may be elaborated by the stakeholders or the evaluator. They may, therefore, be diverse, multiple, and conflicting; and it is the craft of evaluation to select those worthy of investigation and put them to the test (Weiss, 1997).
TBE approaches have asserted themselves in opposition to the profusion of method-oriented approaches that prevailed in the recent past (Pawson and Tilley, 1997), especially among evaluators of social programs who had received a sociological upbringing in the US. TBE approaches are not method-specific: the method should fit the problem at hand, and no method is in itself preferable (methodological pluralism).
This chapter will look at the role of US sociology in setting this scene. It will inquire into the way the discipline reacted to the call for social program evaluation during the wave of Great Society programs in the 1960s, examine how the tension between methodology and theory within the discipline has influenced the progress of evaluation, and, finally, investigate the legacy of US sociology with regard to recent developments in evaluation practice.
My interest in this issue has recently been revived by Ann Oakley’s Experiments in Knowing (2000). Oakley fights for a sound methodology2 able to graft qualitative insights onto the experimental tradition, a methodology that was developed early in American sociology and applied to social issues from education to criminology, including evaluation. In any case, it was her concentration on methodology that helped me frame what had been my principal research problem for some time: why the sociological tradition coming from the Bureau of Applied Social Research—where “middle range” theories and research methods were so intertwined—had not played the central role in evaluation for which it was so well qualified.
I came to believe that what mattered in evaluation was the ability to blend theoretical and methodological tools that could serve as a basis not only for testing the linear causality underpinning programs conceived as “rational actions,” but also for explaining—to use Merton’s expression—“the unexpected results of purposive social action” (Merton, 1936: 894). As long as the sociological traditions that faced the challenge of social program evaluation took linear causality for granted and were mainly concerned with method, they remained unable to grasp the core of the evaluative thrust. And as soon as other alternatives began to surface—as they are doing now—evaluation was better able to address the problems it was confronted with in the policy field.

Social Programs and the Origin of Sociology in the US

In the 1960s the social problems of urban degradation, unemployment and racial discrimination became so pervasive in the US that a War on Poverty was declared, as part of L.B. Johnson’s “Great Society” program. Public programs were put in place as an alternative to malfunctioning markets. This was so alien to the American tradition (the New Deal experience notwithstanding) that sunset legislation was included, establishing that such programs, often conceived as experiments, would be periodically evaluated so that a decision could be made as to whether to continue, change or suspend them. In an optimistic vein, social programs were conceived as providing solutions to the problems affecting their beneficiaries, who would find jobs and live better lives, go to better schools and embark on satisfactory careers, have better homes that facilitated family life, and so on. A great deal of attention was thus paid to social behavior as an outcome, and sociologists and social psychologists were commissioned to do the first evaluations. This disciplinary link confined evaluation to methodological concerns, which came to be seen as the key to the success of social reform.
American sociology had been founded on the idea that it was a “science” (in opposition to the arts and humanities) and that “society was a laboratory” in which hypotheses about social behavior, mainly formulated as linear theories of the kind “if a) then b),” had to be tested by the scientific method and its canons of objectivity, rigor and empiricism (Oakley, 2000). At the beginning of the twentieth century, interest in methodological matters predominated over theoretical issues in both of the competing schools of scientific sociology: the Columbia school, which promoted social experiments; and the Chicago school, where the emphasis was on qualitative methods of research.3 Then, during the first half of the century, things evolved. At Columbia, Paul Lazarsfeld and Robert Merton collaborated in the Bureau of Applied Social Research.4 They focused on “society in movement,” rather than on “society as a laboratory,” and directed their attention to theories in a way that was previously unknown. Lazarsfeld added to experimentation a larger panoply of methodological strategies (the survey, panel studies) that were better suited to new research interests. Merton (1949) developed a theory of latent and manifest functions as a critique of classical functionalism,5 and detached himself from the grand theorizing of Talcott Parsons,6 who was very influential on the academic scene at the time, by stressing the link between research and “middle-range” theories (Merton, 1968). The latter were supposed to draw middle-range generalizations from the observation of the (un)intended consequences of planned social change (see also Pawson in this volume).
The turning point in US sociology is considered to be The American Soldier, a massive study of soldiers’ attitudes to fighting conducted by Stouffer (1949), a sociologist from Chicago, with the collaboration of sociologists and social psychologists from various schools. Among the many experiments that were conducted, some could be considered real evaluative research designs, especially the studies on the effects of propaganda. Moreover, the data produced by this research offered Merton the material for an elaboration of his famous middle-range theory of “reference group behavior,” based on the concepts of “anticipatory socialization,” “relative deprivation” and “divided loyalties” as mechanisms that linked individual behavior to social structure through role structure (Merton, 1949; Martire, 2006).
When sociologists were called upon to meet the challenge of evaluating social programs (Caro, 1971), both the experimentalist tradition and that of the Bureau seemed endowed with a suitable conceptual apparatus and research methodology.7 The social experiment tradition, which was born at Columbia but had by that time also migrated to the Midwest (University of Chicago and Northwestern University), promised to test whether an outcome was the net effect of a social intervention. The survey tradition, developed at the Bureau, provided a means of understanding the effects of planned social actions on people’s attitudes and behaviors. For both of them, however, methodological involvement became the core of the evaluative effort, and the theoretical concerns that had nonetheless worried brilliant minds in both camps (Campbell and Merton, respectively) were relegated to the back burner. At first the experimental tradition held center stage; after that their fortunes alternated. They still confront each other today, although under different guises.

Experimental Evaluation

The experimental design, in the version of the (quasi-)experiments proposed by Campbell, responded to the need to conduct experiments outside the laboratory, in the real world, and soon came to be considered “the golden rule” for program evaluation. It was credited with the ability to assess the effectiveness of an intervention (a treatment, the independent variable) by demonstrating that it was the “necessary cause” of an effect (the dependent variable), and that its results could be repeated anywhere as long as the treatment was administered the same way (generalization).8 Experimentation became the word for evaluation: not only with respect to evaluation design, but also with respect to programs, which were conceived as ever larger social experiments, as a copious literature on social experimentation has recorded.9 Experimentalists were seen as “methodological specialists (who) focused on the problem of assessing the impact of social change” (Campbell, 1979: 67).
Experimentalists were soon confronted with the difficulty of explaining whether the expected results were in fact the effect of the program (good results), or even whether programs had produced any results at all (the null effect). Most evaluations conducted with experimental designs showed hardly any positive difference between experimental and control groups, a dismaying situation for people advocating social reform who were afraid that a negative result could be utilized by politicians to stop the programs.
There were two ways of approaching the problem of the missing results. The first dealt with substance. As Rossi had noted, programs addressed chronically insoluble problems: it might simply have been unrealistic to think that experiments could work like miracles (Rossi, 1969). This called into question the logic of existing programs and led to requests for better theories to guide them: a path that Chen and Rossi (1983) later took with their proposal for theory-driven evaluations, but one that may not have seemed easy at the time.
The second approach was mainly methodological, and it attracted more attention: if it was not possible to prove anything, then the experiment had not been done correctly. Most of the literature on experimental design deals with topics such as the selection of experimental and control groups, statistical validity, and the measurement of treatment. This literature acknowledges the difficulties of conducting experiments in the field, and often ends up with minimalist conclusions: experiments can be successfully done where certain conditions hold, such as clear goals, a real need for causal inference, and small programs.10
However, experimentalists could not disregard the way theories impacted on methodology. Suchman (1967) is often quoted as having said that a null effect could be attributed either to a failure in program implementation, or to a failure in program theory. In a review of 175 experimental evaluations, Lipsey et al. (1985: 21) cluster the studies into non-theoretical (“black box treatments”), sub-theoretical (in which only program strategy or program principles are mentioned), and theoretical (containing “specific formulations linking elements of the program to desired outcomes”). Failure to provide a theory was responsible for bad operationalization of variables, and thus for low-quality research.11
Campbell himself maintained that if a causal link could not be found between intervention and effect, one should look for plausible rival hypotheses: if the desired effect was not attained, it could be sought by means of another intervention; if it was attained, it could have been the result of causes other than the intervention (as in the case of the reduction in mortality rates after the introduction of speed limits on motorways in Connecticut, analyzed in Campbell (1968)). Instead of “nothing works,” one could say that “something else works.” Campbell admitted that unless the promoter of a program was also the “experimenting administrator,” he might not be interested in another program that could perhaps be implemented by political opponents in the next term.12 At the time, however, very few—even among his fellow experimentalists—followed Campbell in the “visionary” message of seeing “reforms as experiments.”

The Columbia School and the Bureau of Applied Social Research

The sociological tradition that had grown up around the Bureau of Applied Social Research played a smaller role in the evaluation of the social programs of the War on Poverty than could have been expected, considering its record in the previous decade. The main debates in evaluation, such as those on the attribution of causality or on valuing, were not initiated by Bureau people. Moreover, the core conceptual framework of the Bureau did not frame the evaluation discourse. And yet, many elements would have predicted a stronger presence.
First, their work was identified with applied social research, which is what evaluation is. Their predilection for applied social research had even put their academic position at risk. A well-established distinction existed between, on the one hand, basic research engaged in the discovery of new theories and methodologies and, on the other, applied research repeating routine devices, undertaken for th...

Table of contents

  1. Cover Page
  2. Title Page
  3. Copyright Page
  4. Contents
  5. Foreword
  6. Preface
  7. Introduction
  8. Part I: The Evolving Relationship between Evaluation and the Disciplines
  9. 1. US Sociology and Evaluation: Issues in the Relationship between Methodology and Theory
  10. 2. Evaluation and the Disciplines in Development
  11. 3. Economics and Evaluation
  12. Part II: Evaluation and Its Disciplinary Basis
  13. 4. The Intellectual Underpinnings of a Comprehensive Body of Evaluative Knowledge: The Case of INTEVAL
  14. 5. Public Management Theory, Evaluation, and Evidence-Based Policy
  15. Part III: Bridging the Gap between Evaluation and the Disciplines
  16. 6. Interventions as Theories: Closing The Gap between Evaluation and the Disciplines?
  17. 7. Middle Range Theory and Program Theory Evaluation: From Provenance to Practice
  18. 8. Realistic Evaluation and Disciplinary Knowledge: Applications from the Field of Criminology
  19. Contributors
  20. Index