
- 192 pages
- English
- ePUB (mobile friendly)
- Available on iOS & Android
The Myth of Research-Based Policy and Practice
About this book
Martyn Hammersley's provocative new text interrogates the complex relationship between research, policymaking and practice, against the background of the evidence-based practice movement. Addressing a series of probing questions, this book reflects on the challenge posed by the idea that social research can directly serve policymaking and practice.
Key questions explored include:
- Is scientific research evidence-based?
- What counts as evidence for evidence-based practice?
- Is social measurement possible, and is it necessary?
- What are the criteria by which qualitative research should be judged?
The book also discusses the case for action research, the nature of systematic reviews, proposals for interpretive reviews, and the process of qualitative synthesis.
Highly readable and undeniably relevant, this book is a valuable resource for both academics and professionals involved with research.
1 SOME QUESTIONS ABOUT EVIDENCE-BASED PRACTICE
There is an initial problem with the notion of evidence-based practice which needs to be dealt with. This is that its name is a slogan whose rhetorical effect is to discredit opposition. After all, who would argue that practice should not be based on evidence (Shahar 1997: 110)? In the context of medicine, Fowler commented that the implication of the term seems to be that in the past, practice was based ‘on a direct communication with God or the tossing of a coin’ (Fowler 1995: 838). So there is an implication built into the phrase ‘evidence-based practice’ that opposition to it can only be irrational.1
Over time, critics did manage to counter the rhetoric by denying that practice can be based solely upon research evidence, forcing advocates to change ‘evidence-based’ to ‘evidence-informed’ practice.2 At face value, this suggests a more reasonable view of the relationship between research and practice. Yet it is at odds with the radical role initially ascribed to research by the evidence-based practice movement. As a result, albeit in a different way, we have a label that systematically obscures the grounds on which there might be reasonable disagreement with what is proposed. Given this, I will retain the ‘evidence-based’ label here.
In political terms, as a way of mobilising support, the use of such opposition-excluding labels is no doubt highly effective. But it is a poor basis for rational discussion about the major issues that the notion of evidence-based practice raises. Against this background, it is very important to emphasise that one can believe that research evidence is of value for practice without accepting much of what travels under the heading of ‘evidence-based practice’, indeed while rejecting substantial parts of it. So, I take it as given that, on the whole and in the long run, practice would be improved if practitioners were more familiar with the results of research. And there is also, no doubt, scope for more directly policy- and practice-relevant social research. Nevertheless, I believe that there are serious problems with the ideas put forward by the evidence-based practice movement in their initial radical form.
The problems I will discuss here include its privileging of research evidence over other considerations in the decisions of policymakers and practitioners, and of a particular kind of research evidence at that; the assumptions made about the nature of professional practice and about the ‘transmission’ of evidence to practitioners; and the connections between calls for evidence-based practice and managerialism in the public sector.
THE PRIVILEGING OF RESEARCH EVIDENCE
The central claim of the evidence-based practice movement was that research, of a particular kind, can make a very significant contribution to improving the effectiveness of policymaking and practice across many fields. Thus, the term ‘evidence’ was interpreted in a highly restricted way. In their introduction to What Works? Evidence-Based Policy and Practice in the Public Services, Davies et al. comment: ‘the presumption in this book is that evidence takes the form of “research”, broadly defined. That is, evidence comprises the results of “systematic investigation towards increasing the sum of knowledge”’ (Davies et al. 2000: 3). From this point of view, evidence can only come from research, and certainly not from practice itself. Furthermore, as we saw, in the original, classical version of evidence-based practice, only one particular kind of research was trusted as a valid source of evidence: that produced by randomised controlled trials (RCTs) and systematic reviews of their findings.
The transfer of this approach from medicine to the field of social policy and practice largely ignored the history of evaluation research in social science. This had begun, in the 1960s, with advocacy of methods that were similar to the RCT. However, the weaknesses of such an approach soon came to be recognised, and a variety of alternative strategies were developed (Norris 1990; Pawson and Tilley 1997: ch. 1). These included qualitative approaches, which were promoted on the grounds that they could take account of the negotiated trajectory of implementation and the diverse interpretations of policies and programmes among stakeholders, as well as of the unanticipated consequences that more focused quantitative evaluations often missed.
The idea that RCTs can make a major contribution to improving practice stems to a large extent from the assumption that they are systematic, rigorous, and objective in character. By contrast, any ‘evidence’ derived from professional experience is portrayed as unsystematic – since it reflects the idiosyncratic set of ‘cases’ with which a practitioner has happened to come into contact – and also as lacking in rigour – in that it is not built up in an explicit, methodical way but rather through an at least partially unreflective process of sedimentation. Indeed, frequently this contrast is presented in the form of caricature. Thus Oakley (2000: 17) claims that the generalisations of ‘well-intentioned and skilful doctors ... may be fanciful rather than factual’, while Greenhalgh (1997: 4) draws a contrast between evidence-based decision-making and ‘decision-making by anecdote’; and Evans and Benefield (2001: 531) suggest that previously, even when health practitioners used research evidence, they relied upon ‘idiosyncratic interpretations of idiosyncratic selections of the available evidence (rather than objective interpretation of all the evidence)’. Others have portrayed practice as relying upon ‘tradition, prejudice, dogma, and ideology’ (Cox quoted in Hargreaves 1996: 7–8), ‘fad and fashion’ (Slavin 2002: 16) or ‘theory’ (Chalmers 2005). These caricatures complement the already-mentioned rhetorical sleight-of-hand built into the very name of evidence-based practice, and further undermine the scope for reasonable discussion of the important issues involved.
The view of the role of research characteristic of the evidence-based practice movement fits with an Enlightenment-inspired political philosophy that portrays itself as opposing ‘forces of conservatism’, forces that are taken to represent entrenched interests. For example, Oakley claims that the medical profession, along with the pharmaceutical industry, have ‘a vested interest in women’s ill-health – in defining women as sick when they may not be, and in prescribing medical remedies when they may not be needed’ (Oakley 2000: 51). These interests are seen as disguised and protected by the claim of professionals to a kind of expertise that cannot be communicated or shared with lay people, but which instead demands professional autonomy and public trust.
What all this makes clear is that, in some significant respects, the evidence-based practice movement is anti-professional: it challenges the claims of professional practitioners – whether doctors, teachers, social workers, police officers etc. – to be able to make expert judgements on the basis of their experience and local knowledge. Instead, it is argued that what is good practice can only be determined through research.
Of course, it has long been argued that a distinctive feature of professions is that they operate on the basis of a body of scientific knowledge. In the case of medicine this was taken to be the corpus of knowledge made up of anatomy, physiology etc.; and one of the reasons why schoolteaching and social work were regarded as, at best, ‘semi-professions’ (Etzioni 1969) was that they did not have any equivalent ‘knowledge base’ produced by research.3 However, the evidence-based medicine movement argued that even medical practice was not sufficiently research-based because clinical decision-making remained heavily dependent upon the experience and judgement of the individual practitioner. In the case of the semi-professions, of course, the notion of evidence-based practice identifies a type of scientific knowledge on which they can base practice, but it does so in a way that undermines claims to autonomous conduct grounded in the need for experienced practitioners to exercise judgement about what it is best to do in particular cases. Indeed, it appears to presage a routinisation of occupational work that parallels that which has already taken place in manufacturing and lower-level white collar work (Braverman 1974; Holbeche 2012).
We need to look much more carefully both at the features of research-based knowledge, as compared to those of knowledge deriving from practical experience, and at how research findings relate to professional practice. It is important to recognise that research knowledge is always fallible, even if it is more likely to be valid than knowledge from other sources. Thus, we are not faced with a contrast between Knowledge, whose epistemic status is certain, and mere Opinion, whose validity is zero or totally unknown.4 Furthermore, research knowledge usually takes the form of generalisations, of one sort or another, and interpreting the implications of these for dealing with particular cases is rarely straightforward.
Another important point is that factual knowledge is not a sufficient determinant of good practice. One reason for this is that it cannot determine what the ends of good practice should be or even, on its own, what are and are not appropriate means. These matters necessarily rely upon judgements, in which value assumptions play just as much of a role as factual ones. Furthermore, the effectiveness of any practical action usually depends not just on what is done but also on how it is done and when. Skill and timing can be crucial.
For these reasons, there are substantial limitations on what research can offer to policymaking and practice. This is not to argue that it can offer nothing, but rather to caution against excessive claims about its contribution.
THE NATURE OF PROFESSIONAL PRACTICE
Equally important is that a misleading conception of the nature of professional practice is built into some advocacy of evidence-based practice. In effect, it is assumed that it should take the form of pursuing explicitly stated goals (or ‘targets’), selecting strategies for achieving them on the basis of objective evidence about their effectiveness, with the outcomes then being measured in order to assess their degree of success (thereby providing the knowledge required for improving future performance).5
This rationalistic model is not wholly inaccurate or undesirable, but it is defective in important respects. Forms of practice will vary in the degree to which they can usefully be made to approximate it, and it probably does not fit any sort of professional activity very closely. There are several reasons for this: most forms of practice involve multiple goals that have to be pursued more or less simultaneously; these goals cannot be fully specified in a way that avoids reliance upon professional judgement; the same action will often have multiple consequences, some desirable and others less so, and these will be differentially distributed across clients; there is frequently uncertainty surrounding the likely consequences of many strategies; and the situations faced by practical actors are always unique, and generally undergo recurrent change, requiring continual adaptation.
As a result of these features, there can often be reasonable disagreement about what would count as an improvement, about what sorts of improvement are to be preferred, and about how these can best be achieved. These are not matters that research findings can resolve, at least not on their own. Moreover, it will sometimes simply not be sensible to engage in elaborate explication of goals, to consider all possible alternatives, to search extensively for information about the relative effectiveness of various strategies rather than relying upon judgements based on experience, or to try to measure outcomes: given other demands, and the low likelihood that such efforts will succeed in many cases, they will often not be worthwhile. The rationalistic model underpinning the notion of evidence-based practice tends to underplay the extent to which, in many circumstances, the only reasonable option is trial and error, or even ‘muddling through’ (Lindblom 1979), a point that applies as much to forms of occupational practice as it does to policymaking.
In this context, we should note that the very phrase ‘what works’, which the evidence-based practice movement sees as the proper focus for much social research, implies a view of practice as technical: as open to ‘objective’ assessment in terms of what is and is not effective, or what is more and what less effective. I do not want to deny that effectiveness, and even efficiency, are relevant considerations in professional practice, but the information necessary to judge them in the ‘objective’ way proposed will rarely be available, given the difficulties and costs involved. And, as we have seen, any such assessment cannot be separated from value judgements about desirable ends and appropriate means – not without missing a great deal that is important.
It is also essential to recognise that there is a significant difference between medicine and other fields in terms of the nature of professional practice. For whatever reason, much medicine is closer to the technical end of the spectrum, in the sense that there is less diversity in the goals and other considerations treated as relevant, and thereby in evaluative criteria. In addition, there seems to be more scope for identifying relatively simple causal relationships between treatment and outcome in this field than elsewhere. Of course, it is possible to exaggerate these differences. Davies claims that ‘medicine and health care ... face very similar, if not identical, problems of complexity, context-specificity, measurement, and causation’ to the other fields where the notion of evidence-based practice has been promoted (Davies 1999: 112). It is certainly true that there are such problems in medicine; and that in some areas, for example mental health, they are very similar in scale and character to those faced in non-medical fields. However, there are significant differences in this respect between some areas of medicine and the fields in which most social science operates. While what we are dealing with here is only a general difference in degree, it is still a substantial difference.
In short, in my view research usually cannot supply what the notion of evidence-based practice demands of it – specific and highly reliable answers to questions about what ‘works’ and what does not – and professional practice cannot be governed by research findings – because it necessarily relies upon multiple values, tacit judgement, local knowledge, and skills. Moreover, this is especially true in fields outside of medicine. When pressed, advocates of evidence-based practice often concede one or other, or both, of these points. Yet these points undermine the claim that rigorous research, and a reformed version of professional practice that gives more attention to research findings, will lead to a sharp improvement in professional performance and outcomes; and this was the key rationale for evidence-based practice in the first place.
THE TRANSMISSION OF RESEARCH FINDINGS
It is a central assumption of the evidence-based practice movement that research findings need to be presented to lay audiences via reviews of all the relevant and reliable studies, rather than through the findings of each study being disseminated separately. This is sensible, and an important corrective to current pressures on researchers to maximise the impact of individual studies. However, there are questions to be raised about the particular form of literature review promoted by the evidence-based practice movement, namely ‘systematic reviews’.
The concept of systematic review shares some common elements with the notion of evidence-based practice more generally. It portrays the task of reviewing the literature as reducible to explicit procedures that can be replicated; in the same way that advocates of evidence-based practice see professional work as properly governed by explicit rules based upon research evidence. For example, the task of assessing the validity of research findings is portrayed as if this could be done properly simply by applying explicit and standard criteria relating to research design. Yet, validity assessment cannot rely entirely upon information about research design. Much depends upon the nature of the knowledge claims made, and assessment of them always relies upon substantive knowledge as well as on specifically methodological considerations (see Chapter 6). The result of this mistaken approach to validity assessment is that systematic reviews are likely to exclude or downplay some kinds of study that may be illuminating, especially qualitative work, while giving weight to other studies whose findings are open to serious question (see Chapter 8).
An illustration of the attitude underlying the notion of systematic review, and the evidence-based practice movement generally, is Oakley’s recent adoption of wh...
Table of contents
- Cover Page
- Title Page
- Copyright Page
- Contents
- Acknowledgements
- About the author
- Introduction
- 1 Some questions about evidence-based practice
- 2 The myth of research-based policymaking and practice
- 3 Is scientific research evidence-based?
- 4 What counts as evidence for evidence-based practice?
- 5 Is social measurement possible, and is it necessary?
- 6 The question of quality in qualitative research
- 7 Action research: a contradiction in terms?
- 8 On ‘systematic’ reviews of research literatures
- 9 Systematic or unsystematic, is that the question? Some reflections on the science, art and politics of reviewing
- 10 The interpretive attack on the traditional review
- 11 What is qualitative synthesis and why do it?
- References
- Name index
- Subject index