Handbook of Practical Program Evaluation
eBook - ePub

Kathryn E. Newcomer, Harry P. Hatry, Joseph S. Wholey

About this book

The leading program evaluation reference, updated with the latest tools and techniques

The Handbook of Practical Program Evaluation provides tools for managers and evaluators to address questions about the performance of public and nonprofit programs. Neatly integrating authoritative, high-level information with practicality and readability, this guide gives you the tools and processes you need to analyze your program's operations and outcomes more accurately. This new fourth edition has been thoroughly updated and revised, with new coverage of the latest evaluation methods, including:

  • Culturally responsive evaluation
  • Adopting designs and tools to evaluate multi-service community change programs
  • Using role playing to collect data
  • Using cognitive interviewing to pre-test surveys
  • Coding qualitative data

You'll discover robust analysis methods that produce a more accurate picture of program results, and learn how to trace causality back to the source to see how much of the outcome can be directly attributed to the program. Written by award-winning experts at the top of the field, this book also contains contributions from the leading evaluation authorities among academics and practitioners to provide the most comprehensive, up-to-date reference on the topic.

Valid and reliable data constitute the bedrock of accurate analysis, and since funding relies more heavily on program analysis than ever before, you cannot afford to rely on weak or outdated methods. This book gives you expert insight and leading-edge tools that help you paint a more accurate picture of your program's processes and results, including:

  • Obtaining valid, reliable, and credible performance data
  • Engaging and working with stakeholders to design valuable evaluations and performance monitoring systems
  • Assessing program outcomes and tracing desired outcomes to program activities
  • Providing robust analyses of both quantitative and qualitative data

Governmental bodies, foundations, individual donors, and other funding bodies are increasingly demanding information on the use of program funds and program results. The Handbook of Practical Program Evaluation shows you how to collect and present valid and reliable data about programs.

Information

Publisher
Jossey-Bass
Year
2015
ISBN
9781118893692

PART ONE
Evaluation Planning and Design

The chapters in Part One discuss a variety of techniques and strategies for planning and designing credible, useful evaluation work. Chapter authors provide guidance relevant to engaging stakeholders, designing evaluation studies including impact evaluations, and designing ongoing monitoring systems.
The chapters cover the following topics:
  • Evaluation planning and design
  • Engaging stakeholders
  • Logic modeling
  • Evaluability assessment and other exploratory evaluation approaches
  • Performance monitoring
  • Comparison group designs
  • Randomized controlled trials
  • Case studies
  • Recruitment and retention of evaluation study participants
  • Multisite evaluations
  • Evaluating community change programs
  • Culturally responsive evaluation
Evaluation design involves balancing evaluation costs with the likely usefulness of the evaluation results. In general, the higher the level of precision, reliability, and generalizability of an evaluation, the higher the evaluation costs in terms of time (calendar time and the time of managers, staff, clients, and others affected by the evaluation process); financial costs; and political and bureaucratic costs, such as perceived disruptions and loss of goodwill among those affected. The value of an evaluation is measured in the strength of the evidence produced, in the credibility of the evaluation to policymakers, managers, and other intended users, and especially in the use of the evaluation information to improve policies and programs. Matching design decisions to available time and resources is an art, supported by the social sciences.
An evaluation design identifies what questions will be answered by the evaluation, what data will be collected, how the data will be analyzed to answer the questions, and how the resulting information will be used. Each design illuminates an important aspect of reality. Logic modeling is a useful strategy for identifying program components and outcomes, as well as important contextual factors affecting program operations and outcomes. Evaluability assessment explores the information needs of policymakers, managers, and other key stakeholders; the feasibility and costs of answering alternative evaluation questions; and the likely use of evaluation findings—for example, to improve program performance or to communicate the value of program activities to policymakers or other key stakeholders. Performance monitoring systems and descriptive case studies answer questions that ask for description: What's happening? Comparison group designs, randomized experiments, and explanatory case studies answer questions that ask for explanation: Why have these outcomes occurred? What difference does the program make? Many evaluations use a combination of these approaches to answer questions about program performance.

The Chapters

The editors, in Chapter One, describe how to match evaluation approaches to information needs, identify key contextual elements shaping the use of evaluation, produce the methodological rigor needed to support credible findings, and design responsive and useful evaluations.
John Bryson and Michael Patton, in Chapter Two, describe how to identify and engage intended users and other key evaluation stakeholders and how to work with stakeholders to help determine the mission and goals of an evaluation. They highlight the need for flexibility and adaptability in responding to rapidly changing evaluation situations.
John McLaughlin and Gretchen Jordan, in Chapter Three, discuss the logic model, which provides a useful tool for: planning, program design, and program management; communicating the place of a program in a larger organization or context; designing performance monitoring systems and evaluation studies; and framing evaluation reports so that the evaluation findings tell the program's performance story. They describe how to construct and verify logic models for new or existing programs. They also present examples of both basic and complex logic models and identify resources and tools that evaluators can use to learn about and construct logic models.
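The chapter presents logic models graphically rather than in code, but as a rough sketch of the components a basic logic model organizes, a minimal illustration (with entirely hypothetical program elements) might look like this:

```python
# A minimal sketch of a basic logic model as a plain data structure.
# All program elements here are hypothetical, for illustration only.
logic_model = {
    "inputs": ["funding", "staff", "training materials"],
    "activities": ["recruit participants", "deliver workshops"],
    "outputs": ["workshops delivered", "participants trained"],
    "outcomes": {
        "short_term": ["increased knowledge"],
        "intermediate": ["changed practices"],
        "long_term": ["improved community outcomes"],
    },
    "context": ["local labor market", "partner organizations"],
}

# Verify that every component of the model has at least one entry.
for component, entries in logic_model.items():
    assert entries, f"Logic model component '{component}' is empty."
```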
Joseph Wholey, in Chapter Four, describes evaluability assessment, rapid feedback evaluation, evaluation synthesis, and small-sample studies, each of which produces evaluation findings and helps focus future evaluation work. Evaluability assessment assesses the extent to which programs are ready for useful evaluation and helps key stakeholders come to agreement on evaluation criteria and intended uses of evaluation information. Rapid feedback evaluation is an extension of evaluability assessment that produces estimates of program effectiveness, indications of the range of uncertainty in those estimates, tested designs for more definitive evaluation, and further clarification of intended uses of evaluation information. Evaluation synthesis summarizes what is known about program effectiveness on the basis of all relevant research and evaluation studies. Small-sample studies can be used to test performance measures that are to be used in evaluation work. Wholey describes each of these four exploratory evaluation approaches and indicates when one or another of these approaches might be appropriate.
Theodore Poister, in Chapter Five, discusses performance measurement systems: systems for ongoing monitoring of program outcomes. He describes how to design and implement performance measurement systems that will provide information that can be used to improve program performance—without creating disruptions and other negative consequences. Poister focuses particular attention on development of good performance measures and effective presentation of performance information to decision makers.
Gary Henry, in Chapter Six, describes a variety of comparison group designs that evaluators frequently use to make quantitative estimates of program impacts (the causal effects of programs) by comparing the outcomes for those served by a program with the outcomes for those in a comparison group who represent what would have occurred in the absence of the program. He notes that comparison group designs represent alternatives to randomized controlled trials, in which members of the target population are randomly assigned to program participation (treatment) or to an untreated control group, and notes that comparison group designs are often the only practical means available for evaluators to provide evidence about program impact. Henry's chapter will help evaluators to improve their evaluation designs as much as circumstances permit—and will help evaluators to state the limitations on the findings of evaluations based on comparison group designs.
Carole Torgerson, David Torgerson, and Celia Taylor, in Chapter Seven, discuss randomized controlled trials (RCTs), in which participants are randomly assigned to alternative treatments. These authors discuss the barriers to wider use of RCTs but argue that carefully planned RCTs are not necessarily expensive and that the value of the information they provide on program impact often outweighs their cost.
Karin Martinson and Carolyn O'Brien, in Chapter Eight, discuss case studies, which integrate qualitative and quantitative data from multiple sources and present an in-depth picture of the implementation and results of a policy or program within its context. They distinguish three types of case studies: exploratory case studies, which aim at defining the questions and hypotheses for a subsequent study; descriptive case studies, which document what is happening and why to show what a situation is like; and explanatory case studies, which focus on establishing cause-and-effect relationships. Martinson and O'Brien present guidelines that show how to design and conduct single-site and multiple-site case studies, how to analyze the large amounts of data that case studies can produce, and how to report case studies in ways that meet the needs of their audiences.
Scott Cook, Shara Godiwalla, Keeshawna Brooks, Christopher Powers, and Priya John, in Chapter Nine, discuss a range of issues concerning recruitment and retention of study participants in an evaluation study. They share best practices in recruitment (obtaining the right number of study participants with the right characteristics) and retention (maximizing the number of participants who continue to provide needed information throughout the evaluation period). Cook and his colleagues describe how to avoid a number of pitfalls in recruitment and retention, noting, for example, that evaluators typically overestimate their ability to recruit and retain study participants and typically underestimate the time required to obtain study clearance from an institutional review board or from the White House Office of Management and Budget.
Debra Rog, in Chapter Ten, provides principles and frameworks for designing, managing, conducting, and reporting on multisite evaluations: evaluations that examine a policy or program in two or more sites. She presents practical tools for designing multisite evaluations, monitoring evaluation implementation, collecting common and single-site data, quality control, data management, data analysis, and communicating evaluation findings.
Brett Theodos and...
