II
VISIONS FOR EVALUATING SOCIAL PROGRAMS AND PROBLEMS
2
Evaluation in the New Millennium: The Transdisciplinary Vision
Michael Scriven
Claremont Graduate University
A vision, in the sense intended here, is an imagined and hoped-for future, an aspiration, not a hard-money prediction of what will actually occur. My vision for the future of evaluation has several components, some of which coincide with my expectations but many of which do not. On this occasion, we're allowed to dream, and hence perhaps, by dreaming, to nudge the future a little nearer to our dreams. I shall first talk about the discipline of evaluation, by contrast with the practice of evaluation, much as we distinguish the discipline of jurisprudence or medicine or pedagogy from the common practice of it.
1. First, I hope (and, in this case, expect) that the essential nature of evaluation itself will crystallize in our minds into a clear and essentially universal recognition of it as a discipline: a discipline with a clear definition, subject matter, logical structure, and multiple fields of application. In particular, it will, I think, become recognized as one of that elite group of disciplines which I call transdisciplines (using the term in a slightly different but related sense from that employed by President Upham in his welcoming remarks). These disciplines are notable because they supply essential tools for other disciplines, while retaining an autonomous structure and research effort of their own. More on this "service function" in a moment.
2. Second, I hope to see a gradual but profound transformation of the social sciences under the influence of evaluation in the following three ways.
2.1 Applied social science will divide into the progressive, evaluation-enriched school and the conservative, evaluation-impaired school. The evaluation-enriched group (continuing to be led, we hope, by the School of Behavioral & Organizational Sciences at Claremont Graduate University) will become the winner in nearly all bids for contracts aimed at separating solutions from non-solutions of social/educational problems. The evaluation-impaired branch, following in the tracks of typical applied social science departments today, will gradually wither on the vine, its aging adherents exchanging stories about the good old days.
Now, we should not forget that they were good old days, at least from the point of view of psychological science. Experiments were run in those days, before the notion of informed consent had become a constraint, that we could never get away with today, and we learned some very interesting things about human behavior from them. We learned that the willingness to follow instructions can outweigh the reluctance to inflict extreme pain on innocent victims, even among those brought up in our own relatively democratic society; we learned, from Hartshorne and May, amongst others, that our standard conceptualizations of behavior often rest firmly on completely unfounded assumptions and stereotypes; and we learned from Meehl and Dawes that experienced clinicians can't match the predictions of inexperienced statisticians armed with a longitudinal database. But none of this learning solved social problems, although it helped head off some popular non-solutions. When it comes down to determining whether specific solutions work for specific problems, such as reducing crime, controlling the abuse of alcohol, or assisting refugees from the dot.coms to find another job, we need serious social program evaluators. Anything less lets in the snake-oil salesmen, like those peddling the DARE program, who, in a move most auspicious for our millennial vision (although two years or more overdue, with the attendant costs), finally got their comeuppance a week ago when the program's supporters capitulated.
A key point in the war against snake-oil is that it can't be won by those who merely have a PhD in what is now generally thought of as the applied social sciences. Contrary to popular belief amongst faculty in the more traditional programs of that genre, that is not always enough to make you competent to reliably distinguish worthless programs from competent ones. It's a great start, but in many a race it stops well short of the stretch. Later in this chapter, you'll find a list of some of the elements missing from the conventional applied psych PhD's repertoire; it's called the Something More List. Mind you, that is only a list of the curriculum component of what's missing: the application skills must also be acquired.
So, academic social science training will be radically different, from course content to internship experiences: it will include large components of evaluation, both the logic and the application practice. That's the first of the big changes in the social sciences that evaluation will, I am certain, eventually produce.
2.2 Second, I expect to see a change in the metaview of social science from that of the last century. The metaview is the view of the nature of the subject itself (the conception of the proper paradigm for it) held by the educated citizenry, not just by the social scientists whose activities it fuels. The last century was dominated by the value-free paradigm of the social sciences; this century will, I hope, be dominated by the paradigm of what I'll call the "evaluative social sciences." The social sciences will be seen as the proper home of the scientific study of evaluative questions. This includes not only the descriptive study of values and those who hold them (a role they have always had without dispute) but also the home range of normative evaluative inquiry, meaning inquiry whose conclusions are directly evaluative: directly about good and bad solutions to social problems, directly about right and wrong approaches, directly about better and worse problems.
Consequent upon the change in the metaview of social science by social scientists, I expect to see a major change in the public view of the social sciences. This will be an acutely bivalent change, with the conservative forces arguing that the new evaluative social science is a devil that has escaped from its proper confinement, and the more enlightened group realizing that the confinement had been a confidence trick that took them in, and whose destruction liberates them to attack the great social problems in a scientific and comprehensive way.
Of course, I am not suggesting that the social sciences should relinquish any part of their traditional role in analyzing the configuration and causation of social and behavioral phenomena, but rather that they must add an extra dimension to that, in order to take on the extra topic of true evaluative research. An extreme example of the results of this change in the ideology of, and conception of, the social sciences is discussed in more detail below. Under the present heading, however, I want to give an illustration of the present thesis by reminding you of what happened in the analogous case of measurement, where S. S. Stevens added a whole new dimension to courses in social science methods by extending the repertoire of useful tools for tackling problems. In my vision of the future of methods courses, evaluation will do the same by adding coverage of the evaluative family of concepts, and then plunging into the methodology of evaluation, which has its own substantial territory, including some areas that merely extend existing techniques (e.g., of cost analysis) and others that introduce new techniques (e.g., of values critique and weighting). It is this change that will revolutionize the application of the social sciences to social problems.
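To make the weighting idea concrete, here is a minimal worked sketch of one familiar (and much-debated) form it can take, a numerical weight-and-sum synthesis; the criteria, weights, and scores below are purely illustrative assumptions, not figures drawn from this chapter.

\[
\text{Overall}(P_j) = \sum_i w_i\, s_{ij}
\]

where \(w_i\) is the weight attached to criterion \(i\) and \(s_{ij}\) is program \(P_j\)'s score on that criterion. For example, with three assumed criteria (effectiveness, cost, reach) weighted 0.5, 0.3, and 0.2, and two candidate programs scored on a 1 to 10 scale:

\[
\text{Overall}(P_1) = 0.5(8) + 0.3(4) + 0.2(6) = 6.4, \qquad
\text{Overall}(P_2) = 0.5(6) + 0.3(9) + 0.2(7) = 7.1.
\]

A values critique, on this picture, would ask where such weights come from and whether they can be defended, a question the purely numerical synthesis leaves open.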
2.3 Third, we come to the question, naturally arising from reading the brochure for this symposium, of the relation between evaluating social problems and solving them. The conventional stance in evaluation has been "You cook 'em, and we'll taste 'em," but I'm going to argue that this division of labor is neither optimal nor realistic. For those of us who have long worked as professional evaluators, the traditional distinction has turned out to be not only blurred in practice but one that often misses the best path to a solution. It may be time to reconsider it completely, perhaps by reflecting on the following options. I think that evaluation in the future will be seen as legitimately including a limited range of problem-solving activities that include some of these options.
There are at least two paths that practicing evaluators of proposed solutions to social problems quite often follow, thereby turning into co-solution providers, a phenomenon analogous to the way that great editors are often nearer to coauthors than the conventional wisdom supposes. The first of these cases is the one that leads so often to the evaluator's lament that "I could have done so much more if only they had brought me in earlier." This connects with the practice of evaluability critique, whose distinguished originator, Joe Wholey, is with us today. If we are in at the planning stage, as is most appropriate for evaluability analysis, we can often bring focus to a fuzzy plan by asking exactly how anyone will be able to tell whether the plan has succeeded or not (by contrast with the spurious substitute often proposed, namely, how we tell whether the plan was implemented as promised or not). Sometimes, more appropriately, we may ask whether or how it represents a new approach at all. Used too crudely, these "shaping questions" (legitimate formative evaluation commentary on a plan) run the risk of becoming what is often rightly condemned as "the evaluation driving the program." But used well, what in fact often happens is that a good evaluator, like the good editor, sees and suggests a way to modify, rather than merely clarify, the original concept in ways that make it more valuable and more distinctive.
The second main way in which this happens involves the enlightened use of theory-driven evaluation. By "enlightened" I mean an approach that is clear from the beginning about the tripartite role of program theories. Nearly always, one needs to distinguish between: