
eBook - ePub
Performance Evaluation
Proven Approaches for Improving Program and Organizational Performance
- English
- ePUB (mobile friendly)
- Available on iOS & Android
About this book
Performance Evaluation is a hands-on text for practitioners, researchers, educators, and students on how to use scientifically based evaluations that are both rigorous and flexible. Author Ingrid Guerra-López, an internationally known evaluation expert, introduces the foundations of evaluation and presents the models most applicable to the performance improvement field. Her book offers a wide variety of proven tools and techniques and is organized to illustrate evaluation in the context of continual performance improvement.
PART 1
INTRODUCTION TO EVALUATION
CHAPTER 1
FOUNDATIONS OF EVALUATION
This chapter defines and describes evaluation and sets the frame for this book within the principles of performance improvement. Various kinds of evaluation, as well as some closely related processes, are differentiated from each other. The basic challenges that evaluators face are laid out, and the reason that stakeholder commitment is so important is examined. The benefits of evaluation to an organization are listed. Finally, definitions are provided for some key terms used throughout the book and in the evaluation field.
In our daily lives, we encounter decision points on an almost continuous basis: Should I do this, or should I do that? Should I go right or left? Should I take the highway or the back streets? Should I buy now or later? Should I take my umbrella today or not? Life in an organizational setting is no different: We face decisions about which programs to sustain, which to change, and which to abandon, to name but a few organizational dilemmas. How do members of an organization go about making sound decisions? With the use of relevant, reliable, and valid data, gathered through a sound evaluation process aligned with desired long-term outcomes.
Unfortunately, these data are not always available, and if they are, many decision makers do not know they exist, or do not have access to them, or do not know how to interpret and use them to make sound decisions that lead to improved program and organizational performance. In fact, Lee Cronbach (1980) and others have argued that decisions often emerge rather than being logically and methodically made.
Effective leaders are capable of making sound decisions based on sound data, and evaluators can do much to influence the leadership decision-making process. Evaluation can provide a systematic framework that aligns stakeholders, evaluation purposes, desired results and consequences, and all evaluation activities, so that the evaluation product is a responsive and clear recipe for improving performance. This in essence allows the decision-making process to become clearer and more straightforward. Evaluation is the mechanism that provides decision makers with feedback, whether through interim reports and meetings or a final report and debriefing.
A BRIEF OVERVIEW OF EVALUATION HISTORY
Michael Scriven (1991) describes evaluation as a practice that dates back to the evaluation of samurai swords. Another type of evaluation was in evidence as early as 2000 B.C.: Chinese officials held civil service examinations to measure the ability of individuals applying for government positions. And Socrates included verbal evaluations as part of his instructional approach (Fitzpatrick, Sanders, & Worthen, 2004).
More formal educational evaluation can be traced to Great Britain in the 1800s when, in response to dissatisfaction with educational and social programs, the government sent royal commissions to hear testimony from the various institutions. In the 1930s, Ralph Tyler issued a call to measure goal attainment with standardized criteria (Fitzpatrick et al., 2004). And during the 1960s, Scriven and Cronbach introduced formative evaluation (used to guide developmental activities) and summative evaluation (used to determine the overall value of a program or solution), and Stufflebeam stressed outcomes (program results) over process (program activities and resources) (Liston, 1999).
In 1963, Cronbach published an important work, “Course Improvement Through Evaluation,” challenging educators to measure real learning rather than the passive mastery of facts. Moreover, he proposed the use of qualitative instruments, such as interviews and observations, to study outcomes. In the latter part of the 1960s, well-known evaluation figures such as Edward Suchman, Michael Scriven, Carol Weiss, Blaine Worthen, and James Sanders wrote the earliest texts on program evaluation.
In 1971, Daniel Stufflebeam proposed the CIPP model of evaluation, which he said would be more responsive to the needs of decision makers than earlier approaches to evaluation were. In that same year, Malcolm Provus proposed the discrepancy model of evaluation. In 1972, Scriven proposed goal-free evaluation in an effort to encourage evaluators to find unintended consequences. In 1975, Robert Stake introduced responsive evaluation. In 1981, Egon Guba and Yvonna Lincoln proposed naturalistic evaluation, building on Stake's work and feeding the debate between qualitative and quantitative methods (Fitzpatrick et al., 2004).
All of this was occurring in the context of a movement to account for the billions of dollars the U.S. government was spending on social, health, and educational programs (Fitzpatrick et al., 2004; Patton, 1997). Thus, the initial purpose of program evaluation was to judge the worthiness of programs for continued funding. To address this demand for accountability, those responsible for programs soon began to ask evaluators for advice on program improvement as well.
When Sputnik became the catalyst for improving the U.S. position in education, which was lagging behind that of other countries, educational entities in particular began to commission evaluations, partly to document their achievements. The need for evaluators soon grew, and government responded by funding university programs in educational research and evaluation. In the 1970s and 1980s, evaluation grew as a field, with its applications expanding beyond government and educational settings to management and other areas. Evaluations are now conducted in many different settings using a variety of perspectives and methods.
EVALUATION: PURPOSE AND DEFINITION
While some rightly say that the fundamental purpose of evaluation is the determination of the worth or merit of a program or solution (Scriven, 1967), the ultimate purpose, and value, of determining this worth is in providing the information for making data-driven decisions that lead to improved performance of programs and organizations (Guerra-López, 2007a). The notion that evaluation’s most important purpose is not to prove but to improve was originally put forward by Egon Guba when he served on the Phi Delta Kappa National Study Committee on Evaluation around 1971 (Stufflebeam, 2003). This should be the foundation for all evaluation efforts, now and in the future. Every component of an evaluation must be aligned with the organization’s objectives and expectations and the decisions that will have to be made as a result of the evaluation findings. These decisions are essentially concerned with how to improve performance at all levels of the organization: internal deliverables, organizational gains, and public impact. At its core, evaluation is a simple concept:
- It compares results with expectations.
- It finds drivers and barriers to expected performance.
- It produces action plans for improving the programs and solutions being evaluated so that expected performance is achieved or maintained and organizational objectives and contributions can be realized (Guerra-López, 2007a).
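To make these three steps concrete, the following is a minimal, hypothetical sketch (not drawn from the book) of how measured results might be compared with stakeholder expectations in practice. The indicator names, targets, drivers, and barriers are illustrative assumptions only.

```python
# Illustrative sketch only: compares measured results with expectations and
# summarizes gaps, mirroring the three core steps described above.
# All metrics, targets, drivers, and barriers are hypothetical examples.
from dataclasses import dataclass, field


@dataclass
class Indicator:
    name: str            # performance indicator, e.g. "customer retention rate"
    expected: float      # target agreed on with stakeholders
    observed: float      # result measured during the evaluation
    drivers: list[str] = field(default_factory=list)   # factors helping performance
    barriers: list[str] = field(default_factory=list)  # factors hindering performance


def evaluate(indicators: list[Indicator]) -> list[str]:
    """Return simple action-plan notes for each indicator."""
    notes = []
    for ind in indicators:
        gap = ind.observed - ind.expected
        if gap >= 0:
            notes.append(f"{ind.name}: target met; maintain drivers {ind.drivers}")
        else:
            notes.append(f"{ind.name}: gap of {gap:+.1f}; address barriers {ind.barriers}")
    return notes


# Hypothetical usage
results = [
    Indicator("customer retention rate (%)", expected=90.0, observed=84.5,
              drivers=["loyalty program"], barriers=["slow support response"]),
    Indicator("training completion rate (%)", expected=80.0, observed=88.0,
              drivers=["manager follow-up"]),
]
for note in evaluate(results):
    print(note)
```

In the same spirit as the list above, an indicator that meets or exceeds its target points to drivers worth maintaining, while a shortfall points to barriers to be addressed in an action plan.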
Some approaches to evaluation do not focus on predetermined results or objectives, but the approach taken in this book is based on the premise of performance improvement. The underlying assumption is that organizations, whether they fully articulate this or not, expect specific results and contributions from programs and other solutions. As discussed in later chapters, this does not prevent the evaluator or performance improvement professional from employing means to help identify unanticipated results and consequences. The worth or merit of programs and solutions is then determined by whether they delivered the desired results, whether these results are worth having in the first place, and whether the benefits of these results outweigh their costs and unintended consequences.
An evaluation that asks and answers the right questions can be used not only to determine results but also to understand those results and to modify the program or solution being evaluated so that it can better meet the intended objectives within the required criteria. This is useful not only for identifying what went wrong or what could be better but also for identifying what should be maintained. Through appreciative inquiry (Cooperrider & Srivastva, 1987), evaluation can help organizations identify what is going right. Appreciative inquiry is a process that searches for the best in organizations in order to find opportunities for performance improvement. Here too, the efforts are but a means to the end of improving performance. Although most evaluators intend just that, the language and approach they use are charged with assumptions that things are going wrong. For instance, the term problem solving implies from the start that something is wrong. Even if this assumption is not explicit in the general evaluation questions, it makes its way into data collection efforts. Naturally, the parameters of what is asked will shape the information evaluators get back and, in turn, their findings and conclusions. If we ask what is wrong, the respondents will tell us. If...
Table of contents
- Cover
- Table of Contents
- Title
- Copyright
- PREFACE
- ACKNOWLEDGEMENTS
- THE AUTHOR
- PART 1: INTRODUCTION TO EVALUATION
- PART 2: MODELS OF EVALUATION
- PART 3: TOOLS AND TECHNIQUES OF EVALUATION
- PART 4: CONTINUAL IMPROVEMENT
- REFERENCES AND RELATED READINGS
- INDEX
- End User License Agreement