Evaluation Practice

How To Do Good Evaluation Research In Work Settings

Elizabeth DePoy and Stephen French Gilson
About this book

Evaluation Practice bridges the apparent gap between practice and research to present a logical, systematic model to guide all professional thinking and action within the context of everyday professional life. Their framework embraces diverse theories, action, and sets of evidence from a range of professional and disciplinary perspectives.

SECTION 1
Beginnings
CHAPTER 1
Introduction to Evaluation Practice: A Problem Solving Approach through Informed Thinking and Action
In a recent evaluation project, we were asked to examine the process and outcomes of a grant competition. A state-wide grant-maker concerned with citizen access to health care, which we will refer to as “Grant-maker,” was not only interested in knowing the outcomes of the individual projects that it supported, but also wanted to examine the collective impact of the programs on health care access. We revisit this exemplar among many others in this chapter and throughout the book. As you begin this chapter, think of the complexity of the questions posed by Grant-maker and the values that inhere in them.
What is Evaluation Practice?
In this text, we discuss and illustrate “evaluation practice.” Unlike terms such as evaluation, evaluation of practice, or program evaluation, the term “evaluation practice” denotes a comprehensive approach to the integration of systematic appraisal with professional thinking and action. Because evaluation practice considers both anticipated and unanticipated results, it expands beyond mere empirical examination of the process and outcome of professional work activity. Rather, we see “evaluation practice” as all of the following: (a) a framework that integrates evaluation thinking and action within professional thinking, action, and entities; (b) a means to examine and respond to articulated problems and issues; (c) a set of thinking and action strategies through which profession-specific theories and skills can be organized, examined, and verified; and (d) a systematic examination of explicit or implicit behavior change.
Let us take a closer look at each of the criteria.
(a) Evaluation practice as a framework that integrates evaluation thinking and action within professional thinking, action, and entities. A framework is a conceptual scaffold, so to speak, which constrains, provides direction, and structures a phenomenon (Dottin, 2001). The evaluation practice framework guides and structures a systematic process through which professionals themselves, or in concert with others, scrutinize the “why, how, what, and the results of their own activity.” Our model, therefore, holds the professional, not an “external evaluator,” accountable for systematic thinking and action, for careful examination of his/her practices, and for critical appraisal of the results of professional functioning. So who conducts evaluation? Educators, providers, policy makers, public health practitioners, technology experts, business specialists, and so forth: in other words, you do! That is not to say that you must conduct all systematic evaluation in isolation. Evaluation can be done by one individual, a team, in collaboration with others, or in consultation. However, in the evaluation practice model, professional thinking and action cannot be separated from systematic evaluation and, thus, it is the responsibility of all professionals to engage in evaluation practice.
By the term “entities” we refer to the programs, interventions, materials, and so forth that are part of professional activity but do not comprise it in total. In the example of Grant-maker, an entity would be a grant competition program, a grant proposal proffered by an agency, materials and programs resulting from funding, or even a website, all of which are goal oriented.
(b) Evaluation practice as a means to examine and respond to articulated problems and issues. Unlike many evaluation approaches that begin with need, we suggest that evaluation begins with problem identification, which then forms the basis for all subsequent professional activity. Grant-maker missed this important point and invited us into the evaluation process after all funded projects were completed. The problem that Grant-maker was trying to minimize or eliminate was therefore not clear and had to be reconstructed. Without this understanding, Grant-maker would not be able to ascertain whether the individual and collective project results addressed the implicit problem of limited access to health care. As we discuss in detail in Chapter 3, problem identification is an essential thinking strategy as it clarifies, specifies, and lays bare what conditions are desirable and undesirable and thus what outcomes need to be accomplished to reduce or eliminate undesirable circumstances.
(c) Evaluation practice as a set of thinking and action strategies through which profession-specific theories and skills can be organized, examined, and verified. Our third point suggests that evaluation provides the forum in which to apply and test the efficacy of theory and skill, not just programmatic outcome. Evaluation practice is thus a dynamic model for myriad professions, not a theory- or method-specific framework. In evaluation practice, therefore, we not only assess professional action, but also test knowledge and contribute to its further development. In our view, as you look in on your own activity, what you learn informs others as well as the knowledge base of your own field. As we address in more detail below, this third point is contentious.
(d) Evaluation practice as a systematic examination of explicit or implicit behavior change. Whether or not behavior change is explicitly named as the desired outcome of professional activity, we suggest that management of human behavior, or its causes, correlates, and consequences, is the goal of professional action and thus inheres in all evaluation practice activity. Let us consider Grant-maker to illustrate. Grant-maker funded diverse projects all designed to improve access to health care for all citizens. So whether the project was focused on the creation of a product such as software to promote access to electronic health information, a policy to guide alternative funding for underinsured citizens, a study to reveal attitudinal barriers to seeking health care, or even the creation of electronic medical records, all of Grant-maker’s funds in some way were designed to change human behavior. Expansion of electronic access to health information assumes that information is of value in prompting informed health behaviors. Policies mandating alternative funding for those who would otherwise not have the financial resources to seek care ostensibly remove a cause of poor health and, thus, change professional and consumer behavior. Research investigating attitudinal barriers to seeking health care is applied to behavioral change that follows attitudinal shifts. And the creation of the entity of electronic medical records changes the behavior of health care practitioners, insurance professionals, and others involved in the health care enterprise. Moreover, inherent in Grant-maker’s overarching goal of improved access to health care is a change in health behaviors, and the causes, correlates, and consequences of limited access to care.
Historically, and even currently, as we suggested above, evaluation in most fields (McDavid and Hawthorne, 2006) has been taught and treated as separate and distinct from professional activity (Unrau et al., 2001). Some (Grinnell, 1999) suggest that evaluation is an end process to examine the extent to which a desired outcome has been achieved. Others (Patton, 2001; Rossi et al., 2004) describe evaluation as ongoing for the purpose of using feedback to improve practices. Look at the range of definitions of evaluation from current scholars:
• assesses whether a product or program was successfully developed and implemented according to its stakeholders’ needs (Lockee et al., 2001);
• professional judgment (McDavid and Hawthorne, 2005);
• systematic assessment of the worth or value of an object (Joint Committee on Standards for Educational Evaluation, 2003);
• systematic assessment of an object’s worth, merit, probity, feasibility, safety, significance, or equity (Stufflebeam and Shinkfield, 2007).
As you can see, one set of approaches to evaluation combines both outcome and process evaluation (Rossi et al., 2004) while others see evaluation as a set of methods to examine “evaluation objects,” in which the object is a program or person rather than an interactive set of processes and resources (Fitzpatrick et al., 2004). And finally, as the field of evaluation grows and diversifies, many evaluators have developed methods to examine parts of practice as they differentially relate to, and influence, intermediate and final outcome (Mertens, 2004). All of these models are presumed to exist apart from, and provide the empirical systematic structure for others to “look in” on, professional activity and entities (Berlin and Marsh, 1993). A consequence of the separation of evaluation and professional activity has been the mistrust and maligning of each camp by the other. On one extreme, professionals suggest that they do not need evaluation to support the efficacy of their work, and, on the other, evaluators assert that without empirical support, claims are not worthy of professional behavior (Berlin and Marsh, 1993). Yet, in the twenty-first century, trends favoring both professional expertise and accountability continue to expand. The two must find a way not only to co-exist, but also to complement one another (DePoy and Gitlin, 2005; Reed et al., 2006).
In concert with current thinking, we have written this text in proactive response to the dilemmas, challenges, and trends to integrate inquiry with practice. The chapters that follow provide the rationale, thinking, and action processes to bridge the gap between professional processes, outcomes and entities, and evaluation, and provide the skills and examples to guide both students and professionals in incorporating evaluation as an essential and omnipresent professional action. Consistent with this view, we formally name our approach evaluation practice and define it as comprising the following elements:
• the purposive application of evidence-based, systematic thinking and action;
• processes and actions that define and clarify problems and identify what is needed to resolve them;
• examination of the way in which and extent to which problems have been resolved.
Does this definition remind you of any fields? Think of the words “problem,” “systematic,” “logic,” and “evidence.” These are the foundations of research and practice thinking (DePoy and Gitlin, 2005; Hickey and Zuiker, 2003). It is, therefore, curious that despite the similar descriptors and characteristics, there remains a significant debate across fields regarding the distinction between evaluation and research (Alkin, 1990; Thyer, 2001), and most recently among research, evaluation, and professional practice (MacQueen and Buehler, 2004). At one end of the spectrum, some suggest that despite their empirically-based, systematic activity, neither evaluation nor professional practice qualifies as research or parts thereof because they focus on “doing” and on the assessment of the outcomes of “doing” rather than the production of generalizable knowledge (Fitzpatrick et al., 2004; Lockee et al., 2001).
In looking at the distinction among research, evaluation, and professional activity, scholars such as Weiss (1997) claim that evaluation and research are not distinct fields at all. She draws our attention to the role of evaluation in systematically examining professional activity as a basis for developing knowledge. Yet, others such as Mertens (2004) and Fitzpatrick et al. (2004) hold the criterion of “generalizability” as central to research and thus any activity that is context based cannot be considered as research without the presence of this essential characteristic. But let us think about Grant-maker. Grant-maker funds programs and research that meet the criterion of reducing disparities in access to health care. In Grant-maker’s guidance, all professional activity must be solidly anchored in well-established theory and must have a clear and comprehensive evaluative component that can be shared to inform access beyond the scope of the individual project itself. So, to a large extent, the “program” and evaluation meet the criterion of generalizability. Moreover, all research proposals must be applied to Grant-maker’s priority of reducing access disparities. So, for Grant-maker, the distinctions among research, evaluation, and professional activity are not clear despite their efforts to separate competitions into the two categories of research and programming.
Further obfuscating the debate are the differences among research traditions and methodologies, and even within the same tradition, regarding generalizability. Not all approaches to research have generalizability as their goal and, even when they do, not all methods can achieve that goal (DePoy and Gitlin, 2005). Grant-maker, for example, funded two ethnographic studies of health care access, one examining elder access to mental health care in a small rural town, and the other examining access barriers to rural health services for recent African immigrants.
We transcend the debates by asserting that evaluation practice serves both research and professional action and bridges the gaps among research, evaluation, and professional action. Evaluation practice is grounded in the logic structures and the systematic thinking that undergird all evidence-based thinking processes. However, unlike other definitions of research that adhere to the essential element of generalizability (Rossi et al., 2004; Unrau et al., 2006), the purpose in evaluation practice is explicit, may or may not include generalizability, and is a major determinant of the scope and approach chosen by the inquirer. The centrality of political purpose is the element that, we suggest, sets evaluation apart as a distinct subset of research. And the thinking and action processes of evaluation practice provide a reasoned, evidence-based structure for practice thinking as well. Consider the example in Box 1.1, which refers to our opening exemplar.
Can you see how important purpose is in evaluation and how you might design the evaluation to meet the practice, administrative, and/or funding purposes? Both the individual grantee’s evaluation and our approach described in Box 1.1 would give you valuable information, one looking at immediate, controllable outcomes and the other at long-term outcomes. The choice to examine immediate outcomes resulted in success and continued funding, sustaining, and ultimately expanding the individual grantee’s program.
Box 1.1 Grant-maker
To address Grant-maker’s questions, we first recreated the central problem statement that guided all of Grant-maker’s activity during the years that the evaluation covered. Through a process which we call “problem mapping” (see Chapter 3), we located the articulated or implicit problems in each proposal and, through the literature, looked for their relationships to disparities in access to health care. For example, one project proposed to disseminate information about Medicaid support to those who were potentially eligible, suggesting that a factor in limiting access was lack of information. The extent to which information was disseminated was the desired outcome of that proposal. We took the outcome one step further to look at the associations between information dissemination, information acquisition, and changes in access to health care. Clearly, the purpose of the original proposal and the purposes of our evaluation differed although they bore an important relationship that was at the heart of successful funding. But access to health care, while a desired long-term outcome, was neither the central pur...
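The problem-mapping logic described in Box 1.1 can be pictured as a simple chain running from each project’s implicit sub-problem, through its immediate outcome, to the overarching problem of limited access to health care. The sketch below is our own minimal, hypothetical illustration of that idea in Python; the project names, field names, and chain entries are invented for demonstration and are not drawn from the authors’ model or the actual Grant-maker portfolio.

# A minimal, hypothetical sketch of the "problem mapping" idea in Box 1.1.
# Each funded project addresses an implicit sub-problem and measures an
# immediate outcome; the evaluation extends that outcome through a chain of
# links back to the overarching problem of limited access to health care.
# All project names and chain entries below are invented for illustration.

from dataclasses import dataclass, field
from typing import List

OVERARCHING_PROBLEM = "limited citizen access to health care"

@dataclass
class ProjectProblemMap:
    project: str                      # funded project (hypothetical)
    implicit_problem: str             # sub-problem the proposal addresses
    immediate_outcome: str            # what the project itself measured
    outcome_chain: List[str] = field(default_factory=list)  # links toward the overarching problem

problem_map = [
    ProjectProblemMap(
        project="Medicaid information campaign",
        implicit_problem="lack of information about Medicaid eligibility",
        immediate_outcome="extent of information dissemination",
        outcome_chain=[
            "information dissemination",
            "information acquisition",
            "change in access to health care",
        ],
    ),
]

# Reconstructing the map: does each project's chain terminate in an outcome
# that speaks to the overarching problem Grant-maker wanted to reduce?
for entry in problem_map:
    linked = bool(entry.outcome_chain) and "access to health care" in entry.outcome_chain[-1]
    print(f"{entry.project}: sub-problem = {entry.implicit_problem!r}; "
          f"linked to overarching problem: {linked}")

Laying each proposal out this way simply makes explicit the step the evaluators describe: the grantee’s purpose stops at the immediate outcome, while the evaluation asks whether the chain plausibly reaches the long-term outcome of improved access.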

Table of contents

  1. Cover
  2. Title
  3. Copyright
  4. Contents
  5. Preface
  6. Section 1: Beginnings
  7. Section 2: Thinking Processes of Evaluation Practice
  8. Section 3: Reflexive Action
  9. Section 4: During and After Professional Effort: Did You Resolve Your Problem, How Do You Know, and How Did You Share What You Know?
  10. Appendix: Data Analysis
  11. References
  12. Glossary/Index