Practical Program Evaluation

Theory-Driven Evaluation and the Integrated Evaluation Perspective

eBook - ePub

  1. 464 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

About this book

The Second Edition of Practical Program Evaluation shows readers how to systematically identify stakeholders' needs in order to select the evaluation options best suited to meet those needs. Within his discussion of the various evaluation types, Huey T. Chen details a range of evaluation approaches suitable for use across a program's life cycle. At the core of program evaluation is its body of concepts, theories, and methods. This revised edition provides an overview of these, and includes expanded coverage of both introductory and more cutting-edge techniques within six new chapters. Illustrated throughout with real-world examples that bring the material to life, the Second Edition provides many new tools to enrich the evaluator's toolbox. 

 


Part I Introduction

The first three chapters of this book, which comprise Part I, provide general information about the theoretical foundations and applications of program evaluation principles. Basic ideas are introduced, and a conceptual framework is presented. The first chapter explains the purpose of the book and discusses the nature, characteristics, and strategies of program evaluation. In Chapter 2, program evaluators will find a systematic typology of the various evaluation approaches one can choose among when faced with particular evaluation needs. Chapter 3 introduces the concepts of logic models and program theory, which underlie many of the guidelines found throughout the book.

Chapter 1 Fundamentals of Program Evaluation

The programs that evaluators can expect to assess go by different names, such as treatment program, action program, or intervention program. These programs come from different substantive areas, such as health promotion and care, education, criminal justice, welfare, job training, community development, and poverty relief. Nevertheless, they are all organized efforts to enhance human well-being—whether by preventing disease, reducing poverty, reducing crime, or teaching knowledge and skills. For convenience, programs and policies of any type are usually referred to in this book as “intervention programs” or simply “programs.” An intervention program aims to change individuals’ or groups’ knowledge, attitudes, or behaviors in a community or society. Sometimes an intervention program aims to change the entire population of a community; this kind of program is called a population-based intervention program.

The Nature of Intervention Programs and Evaluation: A Systems View

The terminology of systems theory (see, e.g., Bertalanffy, 1968; Ryan & Bohman, 1998) provides a useful means of illustrating how an intervention program works as an open system, as well as how program evaluation serves the program. In a general sense, as an open system an intervention program consists of five components (input, transformation, outputs, environment, and feedback), as illustrated in Figure 1.1.
Figure 1.1 A Systems View of a Program

Inputs.

Inputs are resources the program takes in from the environment. They may include funding, technology, equipment, facilities, personnel, and clients. Inputs form and sustain a program, but they cannot work effectively without systematic organization. Usually, a program requires an implementing organization that can secure and manage its inputs.

Transformation.

A program converts inputs into outputs through transformation. This process, which begins with the initial implementation of the treatment/intervention prescribed by a program, can be described as the stage during which implementers provide services to clients. For example, the implementation of a new curriculum in a school may mean the process of teachers teaching students new subject material in accordance with existing instructional rules and administrative guidelines. Transformation also includes those sequential events necessary to achieve desirable outputs. For example, to increase students’ math and reading scores, an education program may need to first boost students’ motivation to learn.

Outputs.

These are the results of transformation. One crucial output is the attainment of the program’s goals, which justifies the existence of the program. For example, an output of a treatment program directed at individuals who engage in spousal abuse is the end of the abuse.

Environment.

The environment consists of any factors that, despite lying outside a program’s boundaries, can nevertheless either foster or constrain that program’s implementation. Such factors may include social norms, political structures, the economy, funding agencies, interest groups, and concerned citizens. Because an intervention program is an open system, it depends on the environment for its inputs: clients, personnel, money, and so on. Furthermore, the continuation of a program often depends on how the general environment reacts to program outputs. Are the outputs valuable? Are they acceptable? For example, if the staff of a day care program is suspected of abusing children, the environment would find that output unacceptable. Parents would immediately remove their children from the program, law enforcement might press criminal charges, and the community might boycott the day care center. Finally, the effectiveness of an open system, such as an intervention program, is influenced by external factors such as cultural norms and economic, social, and political conditions. A contrasting system may be illustrative: In a biological system, the use of a medicine to cure an illness is unlikely to be directly influenced by external factors such as race, culture, social norms, or poverty.

Feedback.

So that decision makers can maintain success and correct any problems, an open system requires information about inputs, outputs, transformation, and the environment’s responses to these components. This feedback is the basis of program evaluation. Decision makers need information to gauge whether inputs are adequate and organized, interventions are implemented appropriately, target groups are being reached, and clients are receiving quality services. Feedback is also critical to evaluating whether outputs are in alignment with the program’s goals and are meeting the expectations of stakeholders. Stakeholders are people who have a vested interest in a program and are likely to be affected by evaluation results; they include funding agencies, decision makers, clients, program managers, and staff. Without feedback, a system is bound to deteriorate and eventually die. Insightful program evaluation helps both to sustain a program and to prevent it from failing. The action of feedback within the system is indicated by the dotted lines in Figure 1.1.
To survive and thrive within an open system, a program must perform at least two major functions. First, internally, it must ensure the smooth transformation of inputs into desirable outcomes. For example, an education program would experience negative side effects if faced with disruptions like high staff turnover, excessive student absenteeism, or insufficient textbooks. Second, externally, a program must continuously interact with its environment in order to obtain the resources and support necessary for its survival. That same education program would become quite vulnerable if support from parents and school administrators disappeared.
Thus, because programs are subject to the influence of their environment, every program is an open system. The characteristics of an open system can also be identified in any given policy, which is a concept closely related to that of a program. Although policies may seem grander than programs—in terms of the envisioned magnitude of an intervention, the number of people affected, and the legislative process—the principles and issues this book addresses are relevant to both. Throughout the rest of the book, the word program may be understood to mean program or policy.
Based upon the above discussion, this book defines program evaluation as the process of systematically gathering empirical data and contextual information about an intervention program—specifically answers to what, who, how, whether, and why questions that will assist in assessing a program’s planning, implementation, and/or effectiveness. This definition suggests many potential questions for evaluators to ask during an evaluation: The “what” questions include those such as, what are the intervention, outcomes, and other major components? The “who” questions might be, who are the implementers and who are the target clients? The “how” questions might include, how is the program implemented? The “whether” questions might ask whether the program plan is sound, the implementation adequate, and the intervention effective. And the “why” questions could be, why does the program work or not work? One of the essential tasks for evaluators is to figure out which questions are important and interesting to stakeholders and which evaluation approaches are available for evaluators to use in answering the questions. These topics will be systematically discussed in Chapter 2. The purpose of program evaluation is to make the program accountable to its funding agencies, decision makers, or other stakeholders and to enable program management and implementers to improve the program’s delivery of acceptable outcomes.

Classic Evaluation Concepts, Theories, and Methodologies: Contributions and Beyond

Program evaluation is a young applied science; it began developing as a discipline only in the 1960s. Its basic concepts, theories, and methodologies have been developed by a number of pioneers (Alkin, 2013; Shadish, Cook, & Leviton, 1991). Their ideas, which are foundational knowledge for evaluators, guide the design and conduct of evaluations. These concepts are commonly introduced to readers in two ways. The conventional way is to introduce classic concepts, theories, and methodologies exactly as proposed by these pioneers. Most major evaluation textbooks use this popular approach.
This book, however, not only introduces these classic concepts, theories, and methodologies but also demonstrates how to use them as a foundation for formulating additional evaluation approaches. Readers can not only learn from evaluation pioneers’ contributions but also expand or extend their work, informed by lessons learned from experience or new developments in program evaluation. However, there is a potential drawback to taking this path. It requires discussing the strengths and limitations of the work of the field’s pioneers. Such critiques may be regarded as intended to diminish or discredit this earlier work. It is important to note that the author has greatly benefited from the classic works in the field’s literature and is very grateful for the contributions of those who developed program evaluation as a discipline. Moreover, the author believes that these pioneers would be delighted to see future evaluators follow in their footsteps and use their accomplishments as a basis for exploring new territory. In fact, the seminal authors in the field would be very upset if they saw future evaluators still working with the same ideas, without making progress. It is in this spirit that the author critiques the literature of the field, hoping to inspire future evaluators to further advance program evaluation.
Indeed, the extension or expansion of understanding is essential for advancing program evaluation. Readers will be stimulated to become independent thinkers and feel challenged to creatively apply evaluation knowledge in their work. Students and practitioners who read this book will gain insights from the discussions of different options, formulate their own views of the relative worth of these options, and perform better work as they go forward in their careers.

Evaluation Typologies

Stakeholders need two kinds of feedback from evaluation. The first kind is information they can use to improve a program. Evaluations can function as improvement-oriented assessments that help stakeholders understand whether a program is running smoothly, whether there are problems that need to be fixed, and how to make the program more efficient or more effective. The second kind of feedback evaluations can provide is an accountability-oriented assessment of whether or not a program has worked. This information is essential for program managers and staff to fulfill their obligation to be accountable to various stakeholders.
Different styles of evaluation have been developed to serve these two types of feedback. This section will first discuss Scriven’s (1967) classic distinction between formative and summative evaluation and then introduce a broader evaluation typology.

The Distinction Between Formative and Summative Evaluation

Scriven (1967) made a crucial contribution to evaluation by introducing the distinction between formative and summative evaluation. According to Scriven, formative evaluation fosters the improvement of ongoing activities. Summative evaluation, on the other hand, is used to assess whether results have met the stated goals. Summative evaluation informs the go or no-go decision, that is, whether or not to continue or repeat a program. Scriven initially developed this distinction from his experience with curriculum assessment. He viewed formative evaluation as serving the ongoing improvement of the curriculum, while summative evaluation serves administrators by assessing the entire finished curriculum. Scriven (1991a) provided more elaborate descriptions of the distinction. He defined formative evaluation as “evaluation designed, done, and intended to support the process of improvement, and normally commissioned or done, and delivered to someone who can make improvement” (p. 20). In the same article, he defined summative evaluation as “the rest of evaluation; in terms of intentions, it is evaluation done for, or by, any observers or decision makers (by contrast with developers) who need valuative conclusions for any other reasons besides development.” The distinct purposes of these two kinds of evaluation have played an important role in the way evaluators communicate evaluation results to stakeholders.
Scriven (1991a) indicated that the best illustration of the distinction between formative and summative evaluation is the analogy given by Robert Stake: “When the cook tastes the soup, that’s formative evaluation; when the guest tastes it, that’s summative evaluation” (Scriven, p. 19). The cook tastes the soup while it is cooking in case, for example, it needs more salt. Hence, formative evaluation happens in the early stages of a program so the program can be improved as needed. On the other hand, the...

Table of contents

  1. Cover
  2. Half Title
  3. Acknowledgements
  4. Title Page
  5. Copyright Page
  6. Contents
  7. Preface
  8. Acknowledgements
  9. About the Author
  10. Part I Introduction
  11. Chapter 1 Fundamentals of Program Evaluation
  12. Chapter 2 Understand Approaches to Evaluation and Select Ones That Work: The Comprehensive Evaluation Typology
  13. Chapter 3 Logic Models and the Action Model/Change Model Schema (Program Theory)
  14. Part II Program Evaluation to Help Stakeholders Develop a Program Plan
  15. Chapter 4 Helping Stakeholders Clarify a Program Plan: Program Scope
  16. Chapter 5 Helping Stakeholders Clarify a Program Plan: Action Plan
  17. Part III Evaluating Implementation
  18. Chapter 6 Constructive Process Evaluation Tailored for the Initial Implementation
  19. Chapter 7 Assessing Implementation in the Mature Implementation Stage
  20. Part IV Program Monitoring and Outcome Evaluation
  21. Chapter 8 Program Monitoring and the Development of a Monitoring System
  22. Chapter 9 Constructive Outcome Evaluations
  23. Chapter 10 The Experimentation Evaluation Approach to Outcome Evaluation
  24. Chapter 11 The Holistic Effectuality Evaluation Approach to Outcome Evaluation
  25. Chapter 12 The Theory-Driven Approach to Outcome Evaluation
  26. Part V Advanced Issues in Program Evaluation
  27. Chapter 13 What to Do if Your Logic Model Does Not Work as Well as Expected
  28. Chapter 14 Formal Theories Versus Stakeholder Theories in Interventions: Relative Strengths and Limitations
  29. Chapter 15 Evaluation and Dissemination: Top-Down Approach Versus Bottom-Up Approach
  30. References
  31. Index
  32. Publisher Note