
- 144 pages
- English
- ePUB (mobile friendly)
- Available on iOS & Android
About this book
In recent years collaborative working has moved from being an optional extra of public service practice to being a core competency for individuals and teams. Has this led to better outcomes for those who use services? Understanding and measuring these outcomes, and demonstrating them to a variety of audiences, remains a significant challenge for many. This revised edition of the bestselling textbook includes the latest research findings and contains more tools, frameworks and international examples of best practice to help practitioners evaluate partnerships more effectively. Up-to-date research evidence is presented in a practical and helpful way, making this an essential resource for students.
Evaluating Outcomes in Health and Social Care, by Helen Dickinson and Janine O'Flynn (Social Sciences & Health Policy).
1: What are evaluation and outcomes, and why do they matter?
Evaluation is often considered to be a rather specialist and technical term, but we all engage in evaluation activities on a daily basis. At its most basic level evaluation may be considered the ‘process of determining the merit, worth or value of something, or the product of that process’ (Scriven, 1991, p 139). In deciding what car or cornflakes to buy we are making a comparative judgement about the worth or merit of the different cars or cornflakes available based on the information we have access to. Usually we are looking to get best value for the money we spend, or to find the product or service that is most suited to our needs and tastes.
We don’t only make judgements about the worth or merit of products and services that we personally purchase, however. Whether it is reports of taxpayers’ money being ‘wasted’ through privately financed hospitals, large-scale procurements of computer systems for various public services, or the re-branding of bodies such as the Highways Agency, hardly a day goes by without some media report on the alleged misuse of tax-funded services, organisations or products. The TaxPayers’ Alliance (2014) goes as far as to estimate that £120 billion of taxpayers’ money in the UK is ‘wasted’ annually – at least in terms of their evaluation. In a context of austerity and dramatic reductions to public spending budgets, if correct, this is a significant amount of money. Yet such conclusions are derived on the basis of a series of judgements and assumptions about the way the world is and should be.

Typically, we do not evaluate tax-funded services simply to make sure that they provide value for money on a cost basis. We also want to make sure that individuals using these services receive high quality services and products. Although choice holds a prominent place in the healthcare agenda (at least rhetorically), realistically, many of us have typically had little choice over from whom or where we receive public services, and would expect all public services to offer the same high standards. Moreover, individuals with complex or chronic conditions may be unable to judge the quality of the services they receive, or may have little to compare them to. Such services need to be evaluated to ensure that individuals have access to quality services that they want and need. It is therefore essential that we systematically assess services, and ensure that public services are effective, efficient and delivered in line with the preferences and needs of users.
Collaborative working has assumed a prominent position within public policy not only in the UK but also more widely throughout the developed world. Writing from an Australian perspective, O’Flynn (2009, p 112) argues that ‘a cult of collaboration’ has emerged as this concept has become ‘du jour in Australian policy circles.’ Similar sorts of arguments have been made in the US, continental Europe and a range of other jurisdictions (Haynes, 2015). Often the rhetoric driving the enthusiasm for collaboration relates to the provision of better services for those who use them, and an aspiration to ‘create joined-up solutions to joined-up problems’. This argument has been further supported by a series of high-profile cases (some of which were indicated in Box 0.1) where the inability to work effectively in partnership has been presented as a major source of failure, which can have very real, negative consequences for individuals.
As McCray and Ward (2003) and others have suggested, collaboration often appears as a ‘self-evident truth’, yet it has still not been unequivocally demonstrated that working jointly improves outcomes for individuals who use public services. Despite the huge amount of time and money that has gone into working collaboratively in health and social care and evaluating the resultant impact, there is still a distinct lack of empirical evidence, particularly in terms of service user outcomes. This might be considered problematic in itself (given that collaborative working has assumed a central role in many areas of public policy), but in a context where governments across the UK have argued for the importance of evidence-based policy and practice, it might be considered even more remiss. Evaluation of the outcomes of collaborative working therefore remains an imperative, if not overdue, task. Yet, as we will see during the course of this text, this is often far from an easy process.

This book is the revised edition of the original text. In the eight years since initial publication, the literatures on evaluation and collaboration have grown substantially, and yet many of the questions that were unanswered in the earlier edition remain so today. There have, however, been some significant steps forward in terms of the degree to which outcomes are understood and accepted as important measures across the fields of health and social care, and also in terms of the sophistication of evaluation approaches. Although we still lack definitive data concerning the impacts of collaborative working, a patchwork of evidence is emerging that fills in some of these gaps.
This edition has been updated in terms of the policy context and the evidence nationally and internationally, as well as receiving a complete overhaul in terms of hot topics and emerging issues, frameworks and tools. Our intention is that this should provide relevant up-to-date background information and the tools needed for individuals and teams seeking to evaluate outcomes in collaborative settings.
This chapter explores the health and social care literature to provide practical definitions of key terms in order to help readers think through the types of impacts that health and social care organisations may have for those who use their services and the ways in which we might evaluate this. One of the challenges in this field is that much of the language of evaluation and impact will likely be familiar to most of us and is in common use across many activities in our lives. While this familiarity is in some senses helpful, it can also be limiting, as when used in a context of systematic and scientific evaluation, meanings are typically more specific than everyday usage affords.
The chapter summarises the evolution of health and social care evaluation, and progressions within the field from an interest in inputs and outputs to more quality-based measures associated with outcomes. We also provide an overview of the current political context and interest in evidence-based policy/practice and outcomes, and the associated implications these hold for performance management, accountability and inspection.
Evaluation
As already suggested, evaluation is a broad concept. Within the social sciences it has been described as a family of research methods which involves the:
… systematic application of social research procedures in assessing the conceptualisation and design, implementation, and utility of social intervention programs. In other words, evaluation research involves the use of social research methodologies to judge and to improve the planning, monitoring, effectiveness, and efficiency of health, education, welfare, and other human service programs. (Rossi and Freeman, 1985, p 19)
The systematic part of this definition is important because it tends to differentiate these evaluative approaches from the types of judgements that we make in our everyday lives (for example, about brands of cornflakes or what car to buy). In everyday life we will typically draw on readily available information or may, for bigger or more considered decisions, seek out particular sources of data. The science of evaluation typically goes beyond this, considering what precisely is being evaluated and what information is needed, and carefully selecting methods to collect and analyse that information (Lazenbatt, 2002).

Describing evaluation as a ‘family of research methods’ means that this activity may take many different forms, depending on the type of project and the aims of the evaluation. These varied approaches have often grown out of different academic disciplines or backgrounds and are underpinned by different sets of assumptions. An overview of some of the main types of evaluation that you may encounter within health and social care is set out in Box 1.1. Although presented here as separate types, in reality evaluation can incorporate several of these dimensions within the same project. Theory-led approaches (see Chapter 3), in particular, may be both formative and summative, evaluating process(es) and outcome(s). Some of these different types are more suited to particular stages of a programme to capture particular activities, and Box 1.1 provides an example of this (see also Figure 1.1).
Box 1.1: Common evaluation types used in health and social care
• Feasibility evaluation aims to appraise the likely effects of a programme before it has been implemented. That is, it aims to uncover all the possible consequences and costs of a proposed action in advance of its actual implementation.
• Process evaluation typically looks at the ‘processes’ that go on within the service or programme that is being evaluated. Process evaluations normally help internal and external stakeholders to understand the way in which a programme operates, rather than what it produces.
• Outcome or impact evaluation assesses the outcomes or wider impacts of a programme against the programme’s goals. An outcome evaluation may be a part of a summative evaluation (see below), but would not be part of a process evaluation.
• Summative evaluation tends to be used to help inform decision-makers to decide whether to continue a particular programme or policy. In essence, the aim of this type of research tends to concentrate on outputs and outcomes in order to ‘sum up’ or give an assessment of the effects and efficiency of a programme.
• Formative evaluation differs from summative evaluation in that it is more developmental in nature. It is used to give feedback to the individuals who are able to make changes to a programme so that it can be improved. Formative evaluations are interested in the processes that go on within a programme, but they also look at outcomes and outputs, and use this information to feed back into this process.
• Implementation evaluation assesses the degree to which a programme was implemented as intended. Usually this involves comparing the delivered programme against a model of the intended programme, and analysing the degree to which it departs from its intended purposes.
• Economic evaluation aims to establish the efficiency of an intervention by looking at the relationship between costs and benefits. Not all the costs encountered within this approach are necessarily ‘monetary’-based, and branches such as welfare economics consider ‘opportunity costs’ from a societal perspective (for further details, see Raftery, 1998).
• Pluralistic evaluation attempts to answer some of the critiques of outcome or impact evaluations that are thought to value some views of what constitutes success over others. Evaluations often take their referents of success from the most powerful stakeholders (frequently the funders of evaluations), which risks ignoring other, perhaps more valid, perspectives, such as those of service users. Pluralistic evaluation investigates the different views of what success is and the extent to which a programme was a success against each of them.
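To illustrate the cost–benefit relationship that economic evaluation examines, a common summary measure in health economics (an illustrative formula, not drawn from the text above) is the incremental cost-effectiveness ratio (ICER), which compares a new intervention against current practice:

```latex
% Incremental cost-effectiveness ratio (ICER)
% C = total cost of an option; E = its health effect
% (e.g., measured in quality-adjusted life years, QALYs)
\mathrm{ICER} = \frac{C_{\text{new}} - C_{\text{current}}}{E_{\text{new}} - E_{\text{current}}}
```

For example, an intervention costing £10,000 more than current practice and yielding one additional quality-adjusted life year has an ICER of £10,000 per QALY; decision-makers must then judge whether that ratio represents acceptable value, which is itself an evaluative judgement of the kind discussed in this chapter.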
One explanation for why there are so many different types of evaluation approaches is that there is a range of different reasons why we evaluate. In thinking about measuring performance, Behn (2003) notes eight different reasons why we might wish to understand how teams or organisations are doing (see Box 1.2). These different reasons have implications in terms of the types of approaches we would likely adopt, and the data that would be needed to evaluate performance.
Figure 1.1: Focus of different types o… [figure not reproduced]
Table of contents
- Cover
- Title Page
- Copyright
- Contents
- List of tables, figures and boxes
- Acknowledgements
- List of abbreviations
- Preface
- 1: What are evaluation and outcomes, and why do they matter?
- 2: What does research tell us?
- 3: Hot topics and emerging issues
- 4: Useful frameworks and concepts
- 5: Recommendations for policy and practice
- References