Introduction to Theory-Driven Program Evaluation

Culturally Responsive and Strengths-Focused Applications

Stewart I. Donaldson

About This Book

Introduction to Theory-Driven Program Evaluation provides a clear guide to practicing evaluation science and numerous examples of how these evaluations actually unfold in contemporary practice. A special emphasis is placed on how to conduct theory-driven program evaluations that are culturally responsive and strengths-focused.

In this thoroughly revised new edition, author Stewart I. Donaldson provides a state-of-the-art treatment of the basics of conducting theory-driven program evaluations. Each case follows a three-step model: developing program impact theory; formulating and prioritizing evaluation questions; and answering evaluation questions. The initial chapters discuss the evolution and popularity of theory-driven program evaluation and provide a step-by-step guide for culturally responsive and strengths-focused applications. Succeeding chapters present actual cases and discuss the practical implications of theory-driven evaluation science. Reflections, challenges, and lessons learned across numerous cases from practice are discussed.

The volume is of significant value to practicing evaluators, professors of introductory evaluation courses and their students, and advanced undergraduate and graduate students, and serves as a text or supplementary text for a wide range of evaluation and applied research courses. It will also appeal to readers interested in the connections between work and health, well-being, career development, human service organizations, and organizational improvement and effectiveness.


Information

Publisher: Routledge
Year: 2021
ISBN: 9781000430462
Edition: 2
Subtopic: Operations

PART I
Foundations and Strategies for Theory-Driven Program Evaluation

1
The Evolution of Program Theory-Driven Evaluation Science

This book is about using evaluation science to help promote human welfare and improve the quality of human lives in the 21st century. It focuses on ways to use a modern tool, generally referred to as evaluation science, to develop people, as well as groups of people working toward common goals. These social groups are commonly known as programs and organizations.
Improving organizations and the programs they implement to prevent and/or ameliorate social, health, educational, community, and organizational problems that threaten the well-being of people from all segments of our global community is the pursuit at hand. More specifically, this book explores applications of how evaluation science can be used to develop and improve people, teams, programs, and organizations dedicated to promoting health, well-being, and positive functioning. Given that we live in a time of unprecedented change, uncertainty, and human suffering, this book will indeed be exploring the application and potential of a very useful, powerful, and critically important human-designed tool.
The history of humankind is filled with examples of the development and refinement of tools created to solve the pressing problems of the times (e.g., various types of spears in the hunter–gatherer days, machines in the industrial era, and computers in the information age). The modern-day tool I refer to as evaluation science has evolved over the past five decades, and is now being widely used in efforts to help prevent and ameliorate a variety of human and social problems.
As with other tools, many creative minds have tried to improve on earlier prototypes of evaluation. These vigorous efforts to improve the effectiveness of evaluation, often in response to inefficiencies or problems encountered in previous attempts to promote human welfare, have left us with a range of options or brands from which to choose (Alkin, 2013; Donaldson & Scriven, 2003a,b; Mertens & Wilson, 2018; Shadish, Cook, & Leviton, 1991). For example, now available on the shelf at your global evaluation tool store are prescriptions for how to practice:
  • traditional experimental and social science approaches (Bickman & Reich, 2015; Henry, 2015; Shadish, Cook, & Campbell, 2002);
  • transdisciplinary approaches based on the logic of evaluation (Donaldson, 2013; Scriven, 2003, 2013, 2015, 2016);
  • utilization-focused approaches (Patton, 2021);
  • participatory and empowerment approaches (Cousins, 2019; Donaldson, 2017; Fetterman et al., 2017);
  • equity-focused approaches (Donaldson & Picciotto, 2016);
  • culturally responsive approaches (Bledsoe & Donaldson, 2015; Hood, Hopson, & Frierson, 2014);
  • feminist and gender approaches (Podems, 2010, 2014);
  • values engaged approaches (Greene, 2005; Hall, Ahn, & Greene, 2012);
  • environmental sustainability approaches (Parsons, Dhillon, & Keene, 2019; Patton, 2018);
  • realist approaches (Lemire et al., 2021; Mark, Henry, & Julnes, 2000; Pawson & Tilley, 1997);
  • theory-driven evaluation science approaches that attempt to integrate concepts from most of the other approaches (Chen, 2015; Donaldson, 2007, this volume; Donaldson & Lipsey, 2006; Leeuw & Donaldson, 2015; Rossi in Shadish, Cook, & Leviton, 1991; Weiss, 1997, 2010); and
  • many other approaches that can be found in recent catalogs by Alkin (2012) and Mertens and Wilson (2018).
This proliferation of evaluation approaches has led evaluation scholars to spend some of their professional time developing catalogs profiling the tools and suggesting when we might use a particular brand (Alkin, 2012; Mertens & Wilson, 2018; Stufflebeam & Coryn, 2014). Others have taken a different approach and tried to determine which brands are superior (Shadish, Cook, & Leviton, 1991), whereas yet another group has strongly argued that some brands are ineffective and should be kept off the shelf altogether (Scriven, 1997, 1998; Stufflebeam, 2001).
One potential side effect of the rapid expansion of brands and efforts to categorize and critique the entire shelf at a somewhat abstract level is that a deep understanding of the nuances of any one evaluation approach is compromised. It is precisely this detailed understanding of the potential benefits and challenges of using each evaluation approach across various problems and settings that will most likely advance contemporary evaluation practice.
Therefore, the main purpose of this book is to examine in detail arguably one of the most evolved, integrative, and popular evaluation approaches: Program Theory-Driven Evaluation Science. This is accomplished by exploring the practical steps involved in practice, a variety of applications, and detailed cases including findings and lessons learned from completed theory-driven evaluations. In addition, one chapter describes a proposal for using program theory-driven evaluation science to address a hypothetical evaluation problem presented and critiqued by scholars studying the differences between modern evaluation approaches (Alkin & Christie, 2005). These cases and examples illustrate how theory-driven evaluation science can be used to develop and improve programs and organizations, and they suggest ways to refine and simplify the practice of program theory-driven evaluation science.

Comprehensive Theory of Evaluation Practice

History of Evaluation Theory

Shadish et al. (1991) examined the history of theories of program evaluation practice and developed a three-stage model showing how evaluation practice evolved from an emphasis on truth to an emphasis on use, and then to an emphasis on integrating diverse theoretical perspectives. The primary assumption in this framework is that the fundamental purpose of evaluation theory is to specify feasible practices that evaluators can use to construct knowledge of the value of social programs, knowledge that can then be used to ameliorate the social problems to which the programs are relevant.
The criteria of merit used to evaluate each theory of evaluation practice were clearly specified by Shadish et al. (1991). A good theory of evaluation practice was expected to have an excellent knowledge base corresponding to:
  1. Knowledge—what methods to use to produce credible knowledge.
  2. Use—how to use knowledge about social programs.
  3. Valuing—how to construct value judgments.
  4. Practice—how evaluators should practice in “real world” settings.
  5. Social Programming—the nature of social programs and their role in social problem solving.
In their final evaluation of these theories of evaluation practice, Shadish et al. (1991) concluded that the evaluation theories of Stage I theorists focused on truth (represented by Michael Scriven and Donald Campbell); Stage II theorists focused on use (represented by Carol Weiss, Joseph Wholey, and Robert Stake); and that both Stages I and II evaluation theories were inadequate and incomplete in different ways. In other words, they were not judged to be excellent across the five criteria of merit just listed.
In contrast, Stage III theorists (Lee Cronbach and Peter Rossi) developed integrative evaluation theories that attempted to address all five criteria by integrating the lessons learned from the previous two stages. They tried not to leave out any legitimate practice or position from their theories of practice and denied that all evaluators ought to be following the same evaluation procedures under all conditions. Stage III theories were described as contingency theories of evaluation practice, and were evaluated much more favorably than the theories of practice from the previous two stages.
In Shadish et al.’s (1991) groundbreaking work on theories of evaluation practice, Peter Rossi’s seminal formulation of theory-driven evaluation was classified as a Stage III Evaluation Theory of Practice, the most advanced form of evaluation theory in this framework, and was evaluated favorably across the five criteria of merit. Theory-driven evaluation was described as a comprehensive attempt to resolve dilemmas and incorporate the lessons from the applications of past theories to evaluation practice; it attempted to incorporate the desirable features of past theories without distorting or denying the validity of these previous positions on how to practice evaluation.
Shadish et al. (1991) noted that the theory-driven evaluation approach was a very ambitious attempt to bring coherence to a field in considerable turmoil and debate, and that the integration was more or less successful from topic to topic. Rossi’s (2004) theory-driven approach to evaluation offered three fundamental concepts to facilitate integration:
  1. Comprehensive Evaluation—studying the design and conceptualization of an intervention, its implementation, and its utility.
  2. Tailored Evaluation—evaluation questions and research procedures depend on whether the program is an innovative intervention, a modification or expansion of an existing effort, or a well-established, stable activity.
  3. Theory-Driven Evaluation—constructing models of how programs work, using the models to guide question formulation and data gathering; similar to what econometricians call model specification.
These three concepts remain fundamental to the discussion that follows in this book about how to use theory-driven evaluation science to develop and improve modern programs and organizations.
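
To make the third concept more concrete, the sketch below is a hypothetical illustration, not an example from the book: it uses Python and invented program elements to represent a simple program impact theory as a chain of activities, mediators, and outcomes, and then uses that model to draft candidate evaluation questions for each link.

# Illustrative sketch only (hypothetical example, not drawn from the book):
# a minimal representation of a program impact theory as a chain of
# activities -> mediators -> outcomes, used to draft candidate evaluation questions.

from dataclasses import dataclass
from typing import List

@dataclass
class ImpactTheory:
    program: str
    activities: List[str]   # what the program does
    mediators: List[str]    # intermediate changes the activities are expected to produce
    outcomes: List[str]     # the intended end results

    def evaluation_questions(self) -> List[str]:
        # One candidate question per link in the assumed causal chain.
        questions = []
        for a in self.activities:
            questions.append(f"Was '{a}' implemented as intended?")
        for m in self.mediators:
            questions.append(f"Did the program change '{m}' as the theory predicts?")
        for o in self.outcomes:
            questions.append(f"Did changes in the mediators lead to '{o}'?")
        return questions

# Hypothetical program used only for illustration.
theory = ImpactTheory(
    program="Career re-entry training",
    activities=["job-search skills workshops", "one-on-one coaching"],
    mediators=["job-search self-efficacy", "quality of job applications"],
    outcomes=["reemployment within six months"],
)

for question in theory.evaluation_questions():
    print(question)

The point of the sketch is only that making the assumed causal chain explicit gives the evaluator a concrete list of links to question and test, which is the sense in which Rossi's third concept resembles what econometricians call model specification.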
The notion of using a conceptual framework or program theory grounded in relevant substantive knowledge to guide evaluation efforts seemed to take hold in the 1990s. The work of Chen and Rossi (1983, 1987) argued for a movement away from atheoretical, method-driven evaluations, and offered hope that program evaluation would be viewed as and become more of a rigorous and thoughtful scientific endeavor. Chen (1990) provided the first text on theory-driven evaluation, which became widely used and cited in the program evaluation literature. Furthermore, three of the most popular (best-selling) textbooks on program evaluation (Chen, 2015; Rossi, Lipsey, & Henry, 2019; Weiss, 1997) are firmly based on the tenets of theory-driven evaluation science, and offer specific instruction on how to express, assess, and use program theory in evaluation (Donaldson & Crano, 2011; Donaldson & Lipsey, 2006; Leeuw & Donaldson, 2015).
Although Shadish et al.’s (1991) framework has been widely cited and used to organize the field for over a decade, it is now controversial in a number of ways. For example:
  1. Were all the important theorists and theories of practice included?
  2. Was each position accurately represented?
  3. Was the evaluation of each theory of practice valid?
I seriously doubt that most of the theorists evaluated and their followers would answer “yes” to all three questions. On the contrary, I would suspect arguments that the authors’ biases, assumptions, or agendas influenced how they described others’ work and characterized the field of evaluation theory.

Looking Toward the Future

Although knowledge of the history of evaluation theories can provide us with important insights, practicing evaluators tend to be most concerned with current challenges and the future of evaluation practice. More than a decade after Shadish et al.’s (1991) work, Donaldson and Scriven (2003b) employed an alternative methodology to explore potential futures for evaluation practice. Instead of describing, classifying, or evaluating others’ work over the past three decades, they invited a diverse group of evaluators and evaluation theorists to represent their own views at an interactive symposium on the future of evaluation practice. A “last lecture” format was used to encourage each participant to articulate a vision for “How We Should Practice Evaluation in the New Millennium.” That is, based on what each evaluator had learned over her or his evaluation career, she or he was asked to give a last lecture, passing on advice and wisdom to the next generation about how we should evaluate social programs and problems in the 21st century. In addition to the six vision presentations, five prominent evaluators were asked to g...
