Evaluating Social Programs and Problems
eBook - ePub

Evaluating Social Programs and Problems

Visions for the New Millennium

  • 230 pages
  • English
  • ePUB (mobile friendly)
  • Available on iOS & Android

About this book

Today's evaluators are being challenged to help design and evaluate social programs intended to prevent and ameliorate complex social problems in a variety of settings, including schools, communities, and not-for-profit and for-profit organizations. Drawing upon the knowledge and experience of world-renowned evaluators, the goal of this new book is to provide the most up-to-date theorizing about how to practice evaluation in the new millennium. It features specific examples of evaluations of social programs and problems, including the strengths and weaknesses of the most popular and promising evaluation approaches, to help readers determine when particular methods are likely to be most effective. As such, it is the most comprehensive volume available on modern theories of evaluation practice.

Evaluating Social Programs and Problems presents diverse, cutting-edge perspectives articulated by prominent evaluators and evaluation theorists on topics including, but not limited to:
*Michael Scriven on evaluation as a trans-discipline;
*Joseph S. Wholey on results-oriented management;
*David Fetterman on empowerment evaluation;
*Yvonna S. Lincoln on fourth-generation evaluation;
*Donna M. Mertens on inclusive evaluation;
*Stewart I. Donaldson on theory-driven evaluation; and
*Melvin M. Mark on an integrated view of diverse visions for evaluation.

Evaluating Social Programs and Problems is a valuable resource and should be considered required reading for practicing evaluators, evaluators-in-training, scholars and teachers of evaluation and research methods, and other professionals interested in improving social problem-solving efforts in the new millennium.

Evaluating Social Programs and Problems, edited by Stewart I. Donaldson and Michael Scriven, is available in PDF and ePUB format, alongside other popular books in Psychology (History & Theory in Psychology).

Information

II
VISIONS FOR EVALUATING SOCIAL PROGRAMS AND PROBLEMS

2
Evaluation in the New Millennium: The Transdisciplinary Vision

Michael Scriven
Claremont Graduate University
A vision, in the sense intended here, is an imagined and hoped-for future—an aspiration—not a hard-money prediction of what will actually occur. My vision for the future of evaluation has several components, some of which coincide with my expectations but many of which do not. On this occasion, we’re allowed to dream, and hence perhaps by dreaming to nudge the future a little nearer to our dreams. I shall first talk about the discipline of evaluation, by contrast with the practice of evaluation, as we distinguish the discipline of jurisprudence or medicine or pedagogy by contrast with the common practice of it.
1. First, I hope—and, in this case, expect—that the essential nature of evaluation itself will crystallize in our minds into a clear and essentially universal recognition of it as a discipline, a discipline with a clear definition, subject matter, logical structure, and multiple fields of application. In particular, it will, I think, become recognized as one of that elite group of disciplines which I call transdisciplines (using this term in a slightly different but related way to that employed by President Upham in his welcoming remarks). These disciplines are notable because they supply essential tools for other disciplines, while retaining an autonomous structure and research effort of their own. More on this ‘service function’ in a moment.
2. Second, I hope to see a gradual but profound transformation of the social sciences under the influence of evaluation in the following three ways.
2.1 Applied social science will divide into the progressive, evaluation-enriched school and the conservative, evaluation-impaired school. The evaluation-enriched group—continuing to be led, we hope, by the School of Behavioral & Organizational Sciences at Claremont Graduate University—will become the winner in nearly all bids for contracts aimed at separating solutions from non-solutions of social/educational problems. The evaluation-impaired branch, following in the tracks of typical applied social science departments today, will gradually wither on the vine, with its aging adherents exchanging stories about the good old days.
Now, we should not forget that they were good old days, from at least the point of view of psychological science. Experiments were run in those days, before the notion of informed consent had become a constraint, that we could never get away with today, and we learned some very interesting things about human behavior from them. We learned that following instructions is more important to people than avoiding causing extreme pain to innocent victims, even for those brought up in our own relatively democratic society; we learned, from Hartshorne and May, amongst others, that our standard conceptualizations of behavior often rest firmly on completely unfounded assumptions about stereotypes; and we learned from Meehl and Dawes that experienced clinicians can’t match the predictions of inexperienced statisticians armed with a longitudinal database. But none of this learning solved social problems, although it helped head off some popular non-solutions. When it comes down to determining whether specific solutions work for specific problems such as reducing crime, controlling the abuse of alcohol, assisting refugees from the dot.coms to find another job, then we need serious social program evaluators. Anything less lets in the snake-oil salesmen, like those peddling the DARE program—who, in a move that is most auspicious for our millennial vision, although two years or more overdue with the attendant costs—finally got their comeuppance a week ago when the program’s supporters capitulated.
A key point in the war against snake-oil is that it can’t be won by those who just have a PhD in what is now generally thought of as the applied social sciences. Contrary to popular belief amongst faculty in the more traditional programs of that genre, that’s not always enough to make you competent to reliably distinguish worthless from competent programs. It’s a great start, but in many a race, it stops well short of the stretch. Later in this chapter, you’ll find a list of some of the missing elements from the conventional applied psych PhD’s repertoire—it’s called the Something More List. Mind you, that’s only a list of the curriculum component of what’s missing: the application skills must also be acquired.
So, academic social science training will be radically different, from course content to internship experiences: it will include large components of evaluation, both the logic and the application practice. That’s the first of the big changes in the social sciences that evaluation will, I am certain, eventually produce.
2.2 Second, I expect to see a change in the metaview of social science from that of the last century. The metaview is the view of the nature of the subject itself—the conception of the proper paradigm for it—held by the educated citizenry, not just by the social scientists whose activities it fuels. The last century was dominated by the value-free paradigm of the social sciences; this century will, I hope, be dominated by the paradigm of what I’ll call the “evaluative social sciences.” The social sciences will be seen as the proper home of the scientific study of evaluative questions. This includes not only the descriptive study of values and those who hold them—a role they have always had without dispute—but also the role of home range for normative evaluative inquiry, meaning inquiry whose conclusions are directly evaluative: directly about good and bad solutions to social problems, directly about right and wrong approaches, directly about better and worse problems.
Consequent upon the change in the metaview of social science by social scientists, I expect to see a major change in the public view of the social sciences. This will be an acutely bivalent change, with the conservative forces arguing that the new evaluative social science is a devil that has escaped from its proper confinement, and the more enlightened group realizing that the confinement had been a confidence trick that took them in, and whose destruction liberates them to attack the great social problems in a scientific and comprehensive way.
Of course, I am not suggesting that the social sciences should relinquish any part of their traditional role in analyzing the configuration and causation of social and behavioral phenomena; but rather that they must add an extra dimension to that, in order to take on the extra topic of true evaluative research. An extreme example of the results of this change in the ideology of, and conception of, the social sciences is discussed in more detail below. Under the present heading however, I want to give an illustration of the present thesis by reminding you of what happened in the analogous case of measurement, where S. S. Stevens added a whole new dimension to courses in social science methods by extending the repertoire of useful tools for tackling problems. In my vision of the future of methods courses, evaluation will do the same by adding coverage of the evaluative family of concepts, and then plunging into the methodology of evaluation, which has its own substantial territory, including some areas that merely extend existing techniques (e.g., of cost analysis), and others that introduce new techniques (e.g., of values critique and weighting). It is this change that will revolutionize the application of the social sciences to social problems.
2.3 Third, we come to the question, naturally arising from reading the brochure for this symposium, of the relation between evaluating social problems and solving them. The conventional stance in evaluation has been “You cook ‘em, and we’ll taste ‘em” but I’m going to argue that the division of labor is neither optimal nor realistic. For those of us who have long worked as professional evaluators, the traditional distinction has turned out to be not only blurred in practice but one that often misses the best path to a solution. It may be time to reconsider it completely, perhaps by reflecting on the following options. I think that evaluation in the future will be seen as legitimately including a limited range of problem-solving activities that include some of these options.
There are at least two paths that practicing evaluators of proposed solutions to social problems quite often follow, thereby turning into cosolution providers, a phenomenon that is analogous to the way that great editors are often nearer to coauthors than the conventional wisdom supposes. The first of these cases is the one that leads so often to the evaluator’s lament that “I could have done so much more if only they had brought me in earlier.” This connects with the practice of evaluability critique, whose distinguished originator, Joe Wholey, is with us today. If we are in at the planning stage, as is most appropriate for evaluability analysis, we can often bring focus to a fuzzy plan by asking exactly how anyone will be able to tell whether the plan has succeeded or not (by contrast with the spurious substitute often proposed, namely, how do we tell whether the plan was implemented as promised, or not). Sometimes more appropriately, we may ask whether or how it represents a new approach at all. Used too crudely, these ‘shaping questions’—legitimate formative evaluation commentary on a plan—run the risk of becoming what is often rightly condemned as ‘the evaluation driving the program.’ But used well, what in fact often happens is that a good evaluator, like the good editor, sees and suggests a way to modify, rather than merely clarify, the original concept in ways that make it more valuable and more distinctive.
The second main way in which this happens involves the enlightened use of theory-driven evaluation. By ‘enlightened’ I mean an approach that is clear from the beginning about the tripartite role of program theories. Nearly always, one needs to distinguish between:
A. The alleged program theory (this is usually what is meant by the term program logic) according to which the program is believed to operate by the major stakeholders; the one behind the commitment of the program designer and other active stakeholders, usually including the program’s funders. Sometimes good to start with this, just as it’s sometimes good to start by identifying the program goals; but sometimes better not to be cued that much (Cf. goal-free evaluation).
B. The real logic of the program, that is, the machinery map according to which the program in fact operates, as it runs in the field. This is usually but not always different from A, and is sometimes well known to the field staff of the program, even if not the top administrators: but sometimes has to be discovered by the evaluator. Often, in the field, it becomes clear that one of the cogs in the alleged engine doesn’t work but can be bypassed; and sometimes it just needs more grease than the design specifies. The real program logic may or may not be superior to the alleged program logic (i.e., more effective or more efficient). The evaluator has to decide whether to take on the task of answering that question, which requires some time (if it’s possible at all) and is not strictly speaking part of the primary evaluation task, namely, to find out how well the actual program performs. Evaluators are so often imbued with the search for explanations by their training in the social sciences that they can’t shake off the feeling that explaining what’s happening is part of their job, whereas in fact it’s often the grave of the evaluation or at least of its budget. Of course, other things being equal, it’s nice to find the explanations but we all know how often other things are equal.
C. The optimal program theory—the account of how the program should operate, in order to achieve optimal effectiveness from available resources. Is this different from theories A and B? That’s something that the evaluator may decide she or he needs to discover: sometimes it’s obvious that there’s a better way to organize subsystems at least, for example, the information flow, or the supply ordering, and the ev...

Table of contents

  1. THE STAUFFER SYMPOSIUM ON APPLIED PSYCHOLOGY AT THE CLAREMONT COLLEGES
  2. Dedication
  3. CONTENTS
  4. Preface
  5. I INTRODUCTION
  6. II VISIONS FOR EVALUATING SOCIAL PROGRAMS AND PROBLEMS
  7. III REACTIONS AND ALTERNATIVE VISIONS
  8. ABOUT THE CONTRIBUTORS
  9. AUTHOR INDEX
  10. SUBJECT INDEX