Program Evaluation in Practice

Core Concepts and Examples for Discussion and Analysis

Dean T. Spaulding
About This Book

An updated guide to the core concepts of program evaluation

This updated edition of Program Evaluation in Practice covers the core concepts of program evaluation and uses case studies to touch on real-world issues that arise when conducting an evaluation project. This important resource is filled with illustrative examples written in accessible terms and provides a wide variety of evaluation projects that can be used for discussion, analysis, and reflection. The book addresses foundations and theories of evaluation, tools and methods for collecting data, the writing of reports, and the sharing of findings. The discussion questions and class activities at the end of each chapter are designed to help readers process the information in that chapter and integrate it with information from the other chapters, thus facilitating the learning process. As useful for students as it is for evaluators in training, Program Evaluation in Practice is a must-have text for those aspiring to be effective evaluators.

  • Includes expanded discussion of basic theories and approaches to program evaluation
  • Features a new chapter on objective-based evaluation and a new section on ethics in program evaluation
  • Provides more detailed information and in-depth description for each case, including evaluation approaches, fresh references, new readings, and the new Joint Committee Standards for Evaluation


Information

Publisher: Jossey-Bass
Year: 2016
ISBN: 9781118450208

Part 1
Introduction

Chapter 1
Foundations of Program Evaluation

Learning Objectives

After reading this chapter you should be able to
  1. Provide a basic definition of program evaluation
  2. Understand the different activities conducted by a program evaluator
  3. Understand the difference between formative and summative evaluation
  4. Understand the difference between internal and external evaluation
  5. Understand the difference between program evaluation and research

Program Evaluation Vignette

An urban school district receives a three-year grant to implement an after-school program to improve student academic achievement. As staff start to implement the program, the district administrator realizes that an evaluation of the program is mandatory. The district administrator also realizes that such work requires the expertise of someone from outside the district, and the superintendent, with permission from the school board, hires an external evaluator from a local college. After reviewing the grant, the evaluator conducts an initial review of the program’s curriculum and activities. Next the evaluator develops an evaluation plan and presents it at the next school board meeting. The evaluation plan encompasses the objectives that the evaluator has developed and the tools that he will use to collect the data. The evaluator discusses how the plan will provide two different types of feedback as part of the data collection process. Formative evaluation will be used to address issues as the program is happening. For example, one question might be: Are all the stakeholders aware of the program and its offerings? Summative evaluation will be used to answer the overall evaluation question: Did students in the after-school program have a significant increase in their academic achievement compared to those students who did not participate?
program  A temporary set of activities brought together as a possible solution to an existing issue or problem
formative evaluation  A type of evaluation whereby data collection and reporting are focused on the now, providing ongoing, regular feedback to those in charge of delivering the program
summative evaluation  A type of evaluation whereby data collection and reporting occur after the program and all activities have taken place
The board approves the plan, and the evaluator spends the following month collecting data for the formative and summative portions of the project.
At the next board meeting the evaluator presents some of the formative evaluation data and reports that there is a need to increase communication with parents. He suggests that the program increase the number of fliers that are sent home, update the school Web site, and work more collaboratively with the parent council. In addition, he notes that there is wide variation in parent education levels within the district and that a large number of parents speak Spanish as their native language. The evaluator recommends that phone calls be made to parents and that all materials be translated into Spanish.
At the end of project year one, summative findings are presented in a final report. The report shows that lack of parent communication is still a problem, and that there is little difference in scores on the standardized measures used to gauge academic achievement between those students who participated in the program and comparable students who did not participate.
Based on the evaluation report, district officials decide to make modifications to the program for the upcoming year. A parent center, which was not part of the original plan, is added, in the belief that this will help increase parent involvement. In addition, the administration decides to cut back on the number of extracurricular activities the after-school program is offering and to focus more on tutoring and academic interventions, hoping that this will increase academic achievement in year two.

What Is Program Evaluation?

A common definition used to separate program evaluation from research is that program evaluation is conducted for decision-making purposes, whereas research is intended to build our general understanding and knowledge of a particular topic and to inform practice. In general, program evaluation examines programs to determine their worth and to make recommendations for programmatic refinement and success. Although such a broad definition offers little concrete sense of the work to those who have not been involved in program evaluation, the preceding vignette should have highlighted some of the activities unique to it. Let’s look a little more closely at some of those activities as we continue this comparison between program evaluation and research.

What Is a Program?

One distinguishing characteristic of program evaluation is that it examines a program. A program is a set of specific activities designed for an intended purpose, with quantifiable goals and objectives. Although a research study could certainly examine a particular program, most researchers tend to be interested in either generalizing findings back to a wider audience (that is, quantitative research) or discussing how the study’s findings relate back to the literature (that is, qualitative research). With most research studies, especially those that are quantitative, researchers are not interested in knowing how just one after-school program functioned in one school building or district. However, those conducting program evaluations tend to have precisely such a purpose.
Programs come in many different shapes and sizes, and therefore so do the evaluations that are conducted. Educational programs can take place anytime during the school day or after. For example, programs can include a morning breakfast and nutrition program, a high school science program, an after-school program, or even a weekend program. Educational programs do not necessarily have to occur on school grounds. An evaluator may conduct an evaluation of a community group’s educational program or a program at the local YMCA or Boys & Girls Club.

Accessing the Setting and Participants

Another characteristic that sets program evaluation apart from research is the difference in how the program evaluator and the researcher gain access to the project and program site. In the vignette, the program evaluator was hired by the school district to conduct the evaluation of its after-school program. In general, a program evaluator enters into a contractual agreement either directly or indirectly with the group whose program is being evaluated. This individual or group is often referred to as the client.
client  An individual or group for whom the evaluator works directly
Because of this relationship between the program evaluator and the client, the client could restrict the scope of what the evaluator is able to look at. To have the client dictate what one will investigate for a research study would be very unusual. For example, a qualitative researcher who enters a school system to do a study on school safety might find a gang present in the school and choose to follow the experience of students as they try to leave the gang. If a program evaluation were conducted in the same school, the evaluator might be aware of the gang and the students trying to get out of the gang, and this might strike the evaluator as an interesting phenomenon, but the evaluator would not pursue it unless the client perceived it as an important aspect of school safety or unless gang control fit into the original objectives of the program.

Collecting and Using Data

As demonstrated in the vignette, program evaluators often collect two different forms of evaluation data: formative and summative. Formative and summative evaluation are discussed further later in this section; essentially, the purpose of formative data is to improve the very thing being studied, at the very moment it is being studied. Formative data is typically not collected in applied research. Rarely would a researcher have this kind of reporting relationship, in which formative findings are presented to stakeholders or participants for the purpose of immediately changing the program.

Changing Practice

Although program evaluators use the same methods as researchers do to collect data, program evaluation is different from research in its overall purpose or intent, as well as in the speed at which it changes practice. The overall purpose of applied research (for example, correlational, case study, or experimental research) is to expand our general understanding of or knowledge about the topic and ultimately to inform practice. Although gathering empirical evidence that supports a new method or approach is certainly a main purpose of applied research, this doesn’t necessarily mean that people will suddenly abandon what they have been doing for years and switch to the research-supported approach.
In the vignette, we can see that change occurred rapidly through the use of program evaluation. Based on the evaluation report, administrators, school board members, and project staff decided to reconfigure the structure of the after-school program and to establish a parent center in the hope of increasing parent involvement. It was also decided that many of the extracurricular activities would be eliminated and that the new focus would be on the tutorial component of the program, in the hope of seeing even more improvement in students’ academic scores in the coming year.
For another example, consider applied research in the area of instructional methods in literacy. In the 1980s the favored instructional approach was whole language; however, a decade of research began to support another approach: phonics. Despite the mounting evidence in favor of phonics, it took approximately a decade for practitioners to change their instruction. In the early 1990s, however, researchers began to examine the benefits of using both whole language and phonics in what is referred to as a blended approach. Again, despite substantial empirical evidence, it took another ten years for many practitioners to use both approaches in the classroom. This is admittedly a simplified account of what occurred; the purpose here is to show how slowly the systems and settings that applied researchers study tend to implement changes based on that research.
Although there are certainly many program evaluations after which corresponding changes do not occur swiftly (or at all), one difference between program evaluation and research is the greater emphasis in program evaluation on the occurrence of such change. In fact, proponents of certain philosophies and approaches in program evaluation believe that if the evaluation report and recommendations are not used by program staff to make decisions and changes to the program, the entire evaluation was a complete waste of time, energy, and resources (Patton, 1997).

Reporting Findings and Recommendations

Another feature of program evaluation that separates it from research is the way in which program evaluation findings are presented. In conducting empirical research it is common practice for the researcher to write a study for publication—preferably in a high-level, refereed journal. In program evaluation, as shown in the vignette, the findings are presented in what is commonly referred to as the evaluation report, not through a journal article. In addition, the majority of evaluation reports are given directly to the group or client that has hired the evaluator to perform the work and are not made available to others.

Formative and Summative Evaluation

Both quantitative and qualitative data can be collected in program evaluation. Depending on the purpose of and audience for the evaluation, an evaluator may choose to conduct an evaluation that is solely quantitative or solely qualitative, or may take a mixed-methods approach, using quantitative and qualitative data within a project.
The choice of whether to conduct a summative or a formative evaluation is not exclusively dictated by whether the evaluator collects quantitative or qualitative data. Many people have the misperception that summative evaluation involves exclusively quantitative data and that qualitative data is used for formative evaluation. This is not always the case. Whether evaluation feedback is formative or summative depends on what type of information it is and when it is provided to the client (see Figure 1.1).
Figure 1.1 Formative and Summative Evaluation
Data for summative evaluation is collected for the purpose of measuring outcomes and how those outcomes relate to the overall judgment of the program and its success. As demonstrated in the vignette, summative findings are provided to the client at the end of the project or at the end of the project year or cycle. Typically, summative data includes such information as student scores on standardized measures—state assessments, intelligence tests, and content-area tests, for example. Surveys and qualitative data gathered through interviews with stakeholders may also serve as summative data if the questions or items are designed to elicit participant responses that summarize their perceptions of outcomes or experiences.
For example, an interview question that asks participants to discuss any academic or behavioral changes they have seen in students as a result of participating in an after-school program will gather summative information. This information would be reported in an end-of-year report. However, an interview question that asks stakeholders to discuss any improvements that could be made to the program to better assist students in reaching those intended outcomes will gather formative information.
Formative data is different from summative data in that rather than being collected from participants at the end of the project to measure outcomes, formative data is collected and reported back to project staff as the program is taking place. Data gathered for formative evaluation must be reported back to the client in a timely manner. There is little value in formative evaluation when the evaluator does not report such findings to the client until the project is over. Formative feedback can be given through the use of memos, presentations, or even phone calls. The important role of formative feedback is to identify and address the issues or serious problems in the project. Imagine if the evaluator in our vignette had not reported back formative findings concerning parent communication. How many students might not have been able to participate in the after-school activities? One of the evaluator’s tasks is to identify such program barriers, then inform program staff so that changes can occur. When programs are being implemented for the first time, formative feedback is especially important to developers and staff. Some programs require several years of intense formative feedback to get the kinks out before the program can become highly successful.
Formative feedback and the use of that information to change or improve the program constitute one factor that separates program evaluation from most applied research approaches. Classical experimental or quasi-experimental research approaches attempt to control for extraneous ...
