Handbook of Practical Program Evaluation

Kathryn E. Newcomer, Harry P. Hatry, Joseph S. Wholey

About This Book

The leading program evaluation reference, updated with the latest tools and techniques

The Handbook of Practical Program Evaluation provides tools for managers and evaluators to address questions about the performance of public and nonprofit programs. Neatly integrating authoritative, high-level information with practicality and readability, this guide gives you the tools and processes you need to analyze your program's operations and outcomes more accurately. This new fourth edition has been thoroughly updated and revised, with new coverage of the latest evaluation methods, including:

  • Culturally responsive evaluation
  • Adopting designs and tools to evaluate multi-service community change programs
  • Using role playing to collect data
  • Using cognitive interviewing to pre-test surveys
  • Coding qualitative data

You'll discover robust analysis methods that produce a more accurate picture of program results, and learn how to trace causality back to the source to see how much of the outcome can be directly attributed to the program. Written by award-winning experts at the top of the field, this book also contains contributions from the leading evaluation authorities among academics and practitioners to provide the most comprehensive, up-to-date reference on the topic.

Valid and reliable data constitute the bedrock of accurate analysis, and because funding depends more heavily on program analysis than ever before, you cannot afford to rely on weak or outdated methods. This book gives you expert insight and leading-edge tools that help you paint a more accurate picture of your program's processes and results, including:

  • Obtaining valid, reliable, and credible performance data
  • Engaging and working with stakeholders to design valuable evaluations and performance monitoring systems
  • Assessing program outcomes and tracing desired outcomes to program activities
  • Providing robust analyses of both quantitative and qualitative data

Governmental bodies, foundations, individual donors, and other funding bodies are increasingly demanding information on the use of program funds and program results. The Handbook of Practical Program Evaluation shows you how to collect and present valid and reliable data about programs.

Information

Publisher: Jossey-Bass
Year: 2015
ISBN: 9781118893692
Edition: 4

PART ONE
Evaluation Planning and Design

The chapters in Part One discuss a variety of techniques and strategies for planning and designing credible, useful evaluation work. Chapter authors provide guidance relevant to engaging stakeholders, designing evaluation studies including impact evaluations, and designing ongoing monitoring systems.
The chapters cover the following topics:
  • Evaluation planning and design
  • Engaging stakeholders
  • Logic modeling
  • Evaluability assessment and other exploratory evaluation approaches
  • Performance monitoring
  • Comparison group designs
  • Randomized controlled trials
  • Case studies
  • Recruitment and retention of evaluation study participants
  • Multisite evaluations
  • Evaluating community change programs
  • Culturally responsive evaluation
Evaluation design involves balancing evaluation costs with the likely usefulness of the evaluation results. In general, the higher the level of precision, reliability, and generalizability of an evaluation, the higher the evaluation costs in terms of time (calendar time and the time of managers, staff, clients, and others affected by the evaluation process); financial costs; and political and bureaucratic costs, such as perceived disruptions and loss of goodwill among those affected. The value of an evaluation is measured in the strength of the evidence produced, in the credibility of the evaluation to policymakers, managers, and other intended users, and especially in the use of the evaluation information to improve policies and programs. Matching design decisions to available time and resources is an art, supported by the social sciences.
An evaluation design identifies what questions will be answered by the evaluation, what data will be collected, how the data will be analyzed to answer the questions, and how the resulting information will be used. Each design illuminates an important aspect of reality. Logic modeling is a useful strategy for identifying program components and outcomes, as well as important contextual factors affecting program operations and outcomes. Evaluability assessment explores the information needs of policymakers, managers, and other key stakeholders; the feasibility and costs of answering alternative evaluation questions; and the likely use of evaluation findings—for example, to improve program performance or to communicate the value of program activities to policymakers or other key stakeholders. Performance monitoring systems and descriptive case studies answer questions that ask for description: What's happening? Comparison group designs, randomized experiments, and explanatory case studies answer questions that ask for explanation: Why have these outcomes occurred? What difference does the program make? Many evaluations use a combination of these approaches to answer questions about program performance.

The Chapters

The editors, in Chapter One, describe how to match evaluation approaches to information needs, identify key contextual elements shaping the use of evaluation, produce the methodological rigor needed to support credible findings, and design responsive and useful evaluations.
John Bryson and Michael Patton, in Chapter Two, describe how to identify and engage intended users and other key evaluation stakeholders and how to work with stakeholders to help determine the mission and goals of an evaluation. They highlight the need for flexibility and adaptability in responding to rapidly changing evaluation situations.
John McLaughlin and Gretchen Jordan, in Chapter Three, discuss the logic model, which provides a useful tool for: planning, program design, and program management; communicating the place of a program in a larger organization or context; designing performance monitoring systems and evaluation studies; and framing evaluation reports so that the evaluation findings tell the program's performance story. They describe how to construct and verify logic models for new or existing programs. They also present examples of both basic and complex logic models and identify resources and tools that evaluators can use to learn about and construct logic models.
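To make the idea concrete, the sketch below is an illustration only, not an example taken from the chapter: it represents the generic logic model components just described (inputs, activities, outputs, outcomes, and context) as a simple data structure. The job-training program and every entry in it are hypothetical.

```python
# A minimal sketch of a program logic model as a plain data structure.
# The component names follow the generic logic model elements described
# above; the job-training program and its entries are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    program: str
    inputs: List[str] = field(default_factory=list)       # resources invested
    activities: List[str] = field(default_factory=list)   # what the program does
    outputs: List[str] = field(default_factory=list)      # direct products of activities
    outcomes: List[str] = field(default_factory=list)     # intended short- and long-term results
    context: List[str] = field(default_factory=list)      # external factors affecting results

job_training = LogicModel(
    program="Hypothetical job-training program",
    inputs=["grant funding", "trainers", "classroom space"],
    activities=["recruit participants", "deliver 12-week skills course"],
    outputs=["participants trained", "certificates awarded"],
    outcomes=["participants employed within 6 months", "higher earnings"],
    context=["local labor market conditions"],
)

# Reading the model left to right tells the program's performance story:
# resources -> activities -> outputs -> outcomes, within its context.
print(job_training.program, "->", job_training.outcomes)
```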
Joseph Wholey, in Chapter Four, describes evaluability assessment, rapid feedback evaluation, evaluation synthesis, and small-sample studies, each of which produces evaluation findings and helps focus future evaluation work. Evaluability assessment assesses the extent to which programs are ready for useful evaluation and helps key stakeholders come to agreement on evaluation criteria and intended uses of evaluation information. Rapid feedback evaluation is an extension of evaluability assessment that produces estimates of program effectiveness, indications of the range of uncertainty in those estimates, tested designs for more definitive evaluation, and further clarification of intended uses of evaluation information. Evaluation synthesis summarizes what is known about program effectiveness on the basis of all relevant research and evaluation studies. Small-sample studies can be used to test performance measures that are to be used in evaluation work. Wholey describes each of these four exploratory evaluation approaches and indicates when one or another of these approaches might be appropriate.
Theodore Poister, in Chapter Five, discusses performance measurement systems: systems for ongoing monitoring of program outcomes. He describes how to design and implement performance measurement systems that will provide information that can be used to improve program performance—without creating disruptions and other negative consequences. Poister focuses particular attention on development of good performance measures and effective presentation of performance information to decision makers.
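As a hypothetical illustration of the kind of outcome measure such a monitoring system might track (not an example drawn from the chapter), the sketch below reports the share of clients placed in jobs by quarter; the records and field names are invented.

```python
# A minimal sketch of an ongoing outcome measure: the share of clients
# placed in jobs, reported by quarter. Records and field names are
# hypothetical stand-ins for a program's administrative data.
from collections import defaultdict

records = [
    {"quarter": "2015Q1", "placed": True},
    {"quarter": "2015Q1", "placed": False},
    {"quarter": "2015Q2", "placed": True},
    {"quarter": "2015Q2", "placed": True},
]

totals = defaultdict(lambda: [0, 0])  # quarter -> [placed, served]
for r in records:
    totals[r["quarter"]][1] += 1
    if r["placed"]:
        totals[r["quarter"]][0] += 1

for quarter in sorted(totals):
    placed, served = totals[quarter]
    print(f"{quarter}: {placed / served:.0%} of {served} clients placed")
```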
Gary Henry, in Chapter Six, describes a variety of comparison group designs that evaluators frequently use to make quantitative estimates of program impacts (the causal effects of programs) by comparing the outcomes for those served by a program with the outcomes for those in a comparison group who represent what would have occurred in the absence of the program. He notes that comparison group designs represent alternatives to randomized controlled trials, in which members of the target population are randomly assigned to program participation (treatment) or to an untreated control group, and notes that comparison group designs are often the only practical means available for evaluators to provide evidence about program impact. Henry's chapter will help evaluators to improve their evaluation designs as much as circumstances permit—and will help evaluators to state the limitations on the findings of evaluations based on comparison group designs.
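The core calculation behind a comparison group design can be shown with a deliberately simplified sketch (not taken from the chapter): the estimated impact is the difference between mean outcomes for those served and mean outcomes for the comparison group standing in for what would have happened without the program. All outcome values below are hypothetical, and a real analysis would also adjust for pre-existing differences between the groups.

```python
# A minimal sketch of a comparison-group impact estimate: the difference
# between mean outcomes for program participants and for a comparison
# group. Outcome values are hypothetical; no adjustment for selection or
# baseline differences is attempted here.
from statistics import mean

program_outcomes = [62, 71, 68, 75, 66]      # e.g., post-program test scores
comparison_outcomes = [58, 64, 61, 66, 60]   # similar people not served by the program

impact_estimate = mean(program_outcomes) - mean(comparison_outcomes)
print(f"Estimated program impact: {impact_estimate:.1f} points")
```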
Carole Torgerson, David Torgerson, and Celia Taylor, in Chapter Seven, discuss randomized controlled trials (RCTs), in which participants are randomly assigned to alternative treatments. These authors discuss the barriers to wider use of RCTs but argue that carefully planned RCTs are not necessarily expensive and that the value of the information they provide on program impact often outweighs their cost.
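As a hypothetical illustration of the random assignment step at the heart of an RCT (again, not an example from the chapter), the sketch below splits a list of invented participant IDs into treatment and control groups with equal probability, so the two groups are comparable in expectation.

```python
# A minimal sketch of random assignment in an RCT: each eligible
# participant has an equal chance of landing in the treatment or control
# group. Participant IDs are hypothetical; a real trial would document
# the assignment procedure and seed in advance.
import random

participants = [f"P{i:03d}" for i in range(1, 21)]
random.seed(2015)  # fixed seed so the assignment is reproducible
random.shuffle(participants)

half = len(participants) // 2
treatment, control = participants[:half], participants[half:]
print("Treatment:", treatment)
print("Control:  ", control)
```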
Karin Martinson and Carolyn O'Brien, in Chapter Eight, discuss case studies, which integrate qualitative and quantitative data from multiple sources and present an in-depth picture of the implementation and results of a policy or program within its context. They distinguish three types of case studies: exploratory case studies, which aim at defining the questions and hypotheses for a subsequent study; descriptive case studies, which document what is happening and why to show what a situation is like; and explanatory case studies, which focus on establishing cause-and-effect relationships. Martinson and O'Brien present guidelines that show how to design and conduct single-site and multiple-site case studies, how to analyze the large amounts of data that case studies can produce, and how to report case studies in ways that meet the needs of their audiences.
Scott Cook, Shara Godiwalla, Keeshawna Brooks, Christopher Powers, and Priya John, in Chapter Nine, discuss a range of issues concerning recruitment and retention of study participants in an evaluation study. They share best practices in recruitment (obtaining the right number of study participants with the right characteristics) and retention (maximizing the number of participants who continue to provide needed information throughout the evaluation period). Cook and his colleagues describe how to avoid a number of pitfalls in recruitment and retention, noting, for example, that evaluators typically overestimate their ability to recruit and retain study participants and typically underestimate the time required to obtain study clearance from an institutional review board or from the White House Office of Management and Budget.
Debra Rog, in Chapter Ten, provides principles and frameworks for designing, managing, conducting, and reporting on multisite evaluations: evaluations that examine a policy or program in two or more sites. She presents practical tools for designing multisite evaluations, monitoring evaluation implementation, collecting common and single-site data, quality control, data management, data analysis, and communicating evaluation findings.
Brett Theodos and...
