Taming Randomized Controlled Trials in Education

Exploring Key Claims, Issues and Debates

Keith Morrison
About this book

There has been a recent surge in the use of randomized controlled trials (RCTs) in education globally, with disproportionate claims made about what they show, 'what works', and what constitutes the best 'evidence'. Drawing on up-to-date scholarship from across the world, Taming Randomized Controlled Trials in Education critically addresses the increased use of RCTs in education, exploring their benefits, limits and cautions, and ultimately questioning the prominence given to them.

While acknowledging that randomized controlled trials do have some place in education, the book nevertheless argues that this place should be limited. Drawing together all arguments for and against RCTs in a comprehensive and easily accessible single volume, the book also adds new perspectives and insights to the conversation; crucially, the book considers the limits of their usefulness and applicability in education, raising a range of largely unexplored concerns about their use. Chapters include discussions on:

  • The impact of complexity theory and chaos theory.
  • Design issues and sampling in randomized controlled trials.
  • Learning from clinical trials.
  • Data analysis in randomized controlled trials.
  • Reporting, evaluating and generalizing from randomized controlled trials.

Considering key issues in understanding and interrogating research evidence, this book is ideal reading for all students on Research Methods modules, as well as those interested in undertaking and reviewing research in the field of education.



Part I

Setting the scene for randomized controlled trials in education

Part I sets the context for considering RCTs in education, locating them within the evidence-based movement and the ‘what works’ agenda that has been sweeping across the world for over two decades, i.e. sufficient time for its wrinkles, conditions and problems to be identified and addressed. The argument presented is that, when considering the value of RCTs in education, we should rid ourselves of the belief that ‘what works’ is straightforward and unproblematic. Rather, the opposite is the case. ‘What works’ is not only a matter of empirical demonstration; it is a deliberative, value-laden, value-rich, value-saturated matter, and it is unlikely that all parties involved will agree as to whether something ‘works’ or does not ‘work’. Indeed, the very definitions of ‘works’ and ‘evidence’ are contested and unclear.
Chapter 1 opens up the field by raising an initial set of questions in considering what constitutes acceptable evidence and what constitutes ‘works’ in the ‘what works’ debate. These indicate that ‘what works’ contains important sub-questions and sub-issues, and that, even if satisfactory answers can be provided to these questions, this is still insufficient, as predictability, transferability, generalizability and trustworthiness are questionable.
Chapter 2 identifies definitional problems in ‘what works’: what is ‘evidence’, what exactly is the ‘what’ in ‘what works’, how do we ‘know’ or judge whether something ‘works’ or does not ‘work’, and what constitutes acceptable evidence. The chapter argues that evidence is not neutral; rather it is that which is brought forward to make a case beyond a reasonable doubt. The chapter draws on legal analogies in presenting evidence, and sets out demanding criteria for evidence that can be applied to ‘evidence’ in education. This, it is argued, enables rigour to be demonstrated in judging ‘what works’ and what are appropriate indicators of this. In turn, this raises challenges for educators trying to disentangle ‘what works’ in a complex, multivariate, multidimensional world, and assessing and evaluating ‘what works’ in a way that is faithful to such complexity. The problem is compounded because almost anything has been shown by research evidence to ‘work’, so the user of ‘evidence’ has to be able to discriminate between research findings of differing quality and between their different uses. This returns to issues raised in Chapter 1, of the need to address many questions in deciding whether something ‘works’, as there is no single, universal yardstick for objective measures. Whether something ‘works’ depends on who is judging and on whose evidence, in what circumstances and conditions, and so on; it is a human activity, not a mechanistic formula. Whilst the power and integrity of evidence are important, the chapter opens up a vast array of questions and concerns about research evidence more widely.
Chapter 3 sets the context for discussions of causality in RCTs and, in doing so, makes a case for a much more nuanced, complex, cautious and less naïve approach to understanding causality than RCTs provide. To the argument that the strength of RCTs lies precisely in the fact that they show causality, other factors having been controlled out or simply over-ridden, the chapter responds that this claim is not as simple as RCTs would have it, and that causality as espoused in RCTs is much more problematic and uncertain, not least in mistaking as ‘noise’ what is, in fact, part of the ‘signal’.
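To make the ‘noise’ versus ‘signal’ point concrete, the following is a minimal simulation sketch (ours, not the book’s; the subgroups, effect sizes and sample sizes are all invented). A mean-difference analysis of the simulated trial reports one modest average effect, while the systematic subgroup variation beneath it, arguably part of the real signal, is absorbed into the error term:

```python
# Hypothetical illustration: an average treatment effect can hide
# systematic subgroup variation that a mean-difference analysis
# would treat as mere noise. All numbers are invented.
import numpy as np

rng = np.random.default_rng(42)
n = 1000  # pupils per arm (hypothetical)

# Half the pupils in each arm belong to subgroup A (0), half to B (1).
sub_t = rng.integers(0, 2, size=n)
sub_c = rng.integers(0, 2, size=n)

# Suppose the intervention helps A (+0.5 sd) but harms B (-0.3 sd).
effect = np.where(sub_t == 0, 0.5, -0.3)

control = rng.normal(0.0, 1.0, size=n)  # baseline outcomes
treated = rng.normal(effect, 1.0)       # outcomes shifted per subgroup

# The headline figure: one modest 'signal' of roughly +0.1 sd.
print(f"Average effect: {treated.mean() - control.mean():+.2f}")

# The variation underneath it: roughly +0.5 and -0.3.
for g, name in ((0, "A"), (1, "B")):
    d = treated[sub_t == g].mean() - control[sub_c == g].mean()
    print(f"Subgroup {name} effect: {d:+.2f}")
```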
Having set the scene for considerations of ‘evidence’ and its limitations, ‘what works’ and questions against its apparent simplicity, and the importance of attending to causality, Part II moves specifically to RCTs in education.

Chapter 1

Questioning evidence of ‘what works’ in educational research

Introduction and overview

This chapter argues that:
  • Findings concerning ‘what works’ are equivocal rather than certain, often lacking predictability and generalizability.
  • RCTs are only one of a vast range of types, methodologies and methods of research in education, and that to elevate them above others is misconceived when one applies the ‘fitness for purpose’ criterion.
  • Defining ‘evidence’, ‘what works’, what is the ‘what’ in ‘what works’, and what ‘works’ means, is open to very different interpretations.
  • Understanding these different interpretations raises many questions that call out all-too-easy definitions and answers.
  • A sober, critical, sceptical view of ‘evidence’ and ‘what works’ is a caution against over-simplistic assertions of what research shows and what can be taken from research in education.
This lays the ground for interrogating the appeal of RCTs in Part II.

The limits of what ‘research shows’

‘The research said that this would work. I tried it. It didn’t.’
It would be hard to justify not having evidence inform what we do in education. We may think that what we do is the best way, but relying on intuition and experience may be insufficient; we may be recycling poor practice whilst earnestly believing that it is good practice because we have been doing it for years. As Cain (2019, p. 10) remarks, reliable research is better than alternatives such as trial and error or, indeed, personal hunches. In a climate of accountability, practices should be informed by evidence rather than its lack. Indeed, Didau (2015), Gorard et al. (2018) and Major and Higgins (2019) note that many practices are not informed by evidence and continue in spite of evidence of their ineffectiveness and harm.
The move towards evidence-based education in judging ‘what works’ appears unstoppable. However, what constitutes evidence, and evidence of ‘what’, is not always clear. There is no ‘one size fits all’ in considering what kind of evidence is important, nor how it is gained and used. It may come from a survey, a test, an observation, an RCT, from the views and wisdom of acknowledged experts, and so on. Fitness for purpose is paramount, and evidence must be actionable and useful. This book, whilst applauding the moves made towards promoting evidence-informed education in principle, argues that, in the ‘what works’ agenda, considerable caution must be exercised in considering the nature and trustworthiness of the evidence, in making the connection between ‘evidence’ and ‘what works’, and in moving from evidence to practice. Evidence is only one element in the ‘what works’ agenda, and it does not provide conclusive, eternal and incontestable truths, but it helps. Research-informed decision making, practice and policy are surely better than their non-informed counterparts.
We have to give the lie to the emphasis placed on large-scale, putatively disinterested and objective ‘evidence’ and to the privileging of RCTs as fitting ‘evidence’ for ‘what works’, as if they were the sole or main path to salvation in improving education. This book calls out those whose preoccupation with certain kinds of ‘evidence’ renders them partially sighted or blind to the benefits of other kinds of evidence in yielding truths in a complex world, or whose all-too-easy dismissal of values-based teaching and the professional wisdom of experienced practitioners is accompanied by a reliance on ‘evidence’ whose basis is shaky, non-generalizable and subject to personal, selective preference. For sure, RCTs have their place, but there is no reason why they should sit alone on the throne of what is considered to be suitable evidence.
Moves towards a ‘what works’ agenda in education, which is intended to be evidence-based and/or evidence-informed, have been sweeping across the world almost without hindrance. It is the new orthodoxy, not least because it serves so many agendas. It purports to have a benevolent intent, improving education and avoiding reliance on untried interventions; it furthers outcomes-based approaches, accountability and performance metrics; and it seeks to draw on ‘best evidence’ and research. Evidence-informed practice uses the best evidence to achieve desired goals or outcomes and, indeed, to prevent undesirable outcomes.
This immediately opens up the debate, as the term ‘best’ is not an empirical matter; it is a normative, moral and ethical matter, requiring judgement, statements of values and deliberation. It moves beyond pragmatics.
That education should be informed by cumulative and progressive research is surely beyond question. As in medicine, the ongoing accumulation of research evidence can drive great strides forward in improving practice. On the other hand, evidence-based practice has received criticism for: misrepresenting the nature of the debate on what schools and educational institutions should be doing; neglecting the inclusion of values in, purposes of, and justifications for education and its decision making; narrowing the curriculum to that which is tested; making contestable assumptions about transferability; and over-simplifying what is, essentially, a highly complex, variable-dense, multi-faceted, multi-layered and multifactorial situation in classrooms and schools. Evidence-based practice has also been criticized for its amoral, pragmatist approach to education, for over-simplifying educational discourses on how to improve education, for adopting too narrow, even singular, an approach to what constitutes ‘evidence’, for accepting all too easily what count as research findings, for overstating the generalizability of research findings, and for excluding a multitude of factors in addressing the ‘what works’ agenda. It has done little to close the gap between research and practice, or between research and policy making.
Whilst it would be difficult to support the view that educational practice should not be evidence-informed, and whilst ‘what works’ should be a worthy goal of education, it is the brand or type of ‘what works’ and ‘evidence’ that matters. High quality research and evidence are essential if we are to ensure that the often once-in-a-lifetime experience of education is maximized. But ‘evidence’ is a slippery term, as it includes more than empirical data and Shakespeare’s ‘ocular proof’ of observed phenomena; rather, it engages issues of worth, values, morals, purposes, justifications, opinions, judgements, contestation and questioning, emanating from many sources. It requires the status, credibility, rigour, scope, worth and applicability of ‘evidence’ and ‘what works’ to be interrogated. ‘What works’ is as much a matter of values and judgement as it is of empirical outcomes; it is a normative, not simply an empirical, matter (Sanderson, 2010).
Even though the agendas and time frames of researchers and policy makers often collide rather than coincide, and research often bears little direct relation or relevance to policy formation and decision making, policy should nevertheless be expertly informed. Governments and policy makers are charged with the responsibility of examining issues in depth. In an age of evidence-based everything, educationists have a right to have policy decisions informed by more than ideology. Answers to questions such as ‘what evidence?’, ‘evidence of what?’ and ‘whose evidence?’ are essential. The claim that evidence shows such-and-such is almost always questionable, as it is not always clear what, exactly, the evidence is ‘evidence of’, and this raises issues of validity and fairness. Solution-focused and strategic policy making, in all areas of educational policy making, should be informed by the best evidence available. Ask yourselves: ‘is it happening?’
Best practice in education should be informed by evidence. Research evidence is a key means of updating, benchmarking and improving practice for practitioners of all types and persuasions, not simply for a coterie of academics or like-minded educationists. High quality research should make a difference; it should open minds. As with policy making, educational practice should be informed by the best evidence available. Again, ask yourselves: ‘is it happening?’
The reader wishing to find out ‘what works’ is all-too-easily swamped by materials from a range of organizations with a benevolent intent in helping educationists and practitioners in many spheres of education to use ‘evidence’ in promoting best practice; a worthy intention. For example: the What Works Clearinghouse; the Education Endowment Foundation together with its Teaching and Learning Toolkit; the Campbell Collaboration; the Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre); the Comprehensive School Reform Quality Center; the Best Evidence Encyclopedia; the Coalition for Evidence-Based Policy; the What Works initiative and What Works Centres; the Evidence Based Education organization; the York University Centre for Reviews and Dissemination; the Alliance for Useful Evidence (Nesta); the Centre for Evidence Informed Policy and Practice; and countless systematic reviews, research syntheses and meta-analyses, some of which date back well before the advent of the ‘what works’ movement (e.g. the journal Review of Educational Research).
However, as Nesta (2016, p. 4) remarks, evidence ‘rarely speaks for itself’; it is mediated and aggregated through a host of sources, parties and affiliations. Hence this book is cautionary. We risk all too easily slipping into simplistic, if attractive, conclusions about ‘what works’ and what constitutes usable ‘evidence’. Rather than rushing headlong into accepting ‘evidence’ as indicating ‘what works’, it is important, for safeguarding high quality education, to address a range of questions, and the list is long, for example:
Definitions
  1. What does ‘works’ mean?
  2. What is the ‘what’ in ‘what works’?
  3. What constitutes ‘evidence’?
  4. Whose evidence?
  5. Is an opinion ‘evidence’ and, if so, whose opinions count?
  6. What is ‘good evidence’ for ‘what works’?
  7. Compared with what do we judge if something ‘works’? Is it any better or worse than other ‘treatments’/methods?
Validity and reliability
  1. Evidence of what, exactly?
  2. Given that ‘what works’ should be judged in terms of the stated purposes of a project or intervention, how sensible or possible is it to separate out those purposes from the whole gamut of purposes of education, intended or not, that are served by a particular intervention?
  3. How to address the complexity, multi-dimensionality and multi-valency of the constituents of ‘what works’?
  4. How can we be sure that something works every time or most of the time?
  5. When is evidence enough to be deemed secure?
  6. Is evidence, per se, enough on which to base decisions about what to do?
  7. What to do with research that shows that something ‘works’ sometimes but not always?
  8. How secure are the findings? Have they been corroborated?
  9. Over how many occasions and contexts must something ‘work’ for it to be claimed that it ‘works’ (e.g. a joke works well once but dies if repeated; a student may obtain a fluke high or low score once)? When is ‘enough’ really enough?
  10. When does something ‘work’, and for how long must it work before it is deemed to be successful?
  11. How soon after an intervention must something ‘work’?
  12. When to assess whether something ‘works’ (assessing in too short a time or too long a time can bring unreliability or invalidity)?
  13. What variables and factors were not included in the research, hence were not controlled (e.g. teacher enthusiasm and expertise)?
  14. What do the terms used in research mean to different participants and readers (e.g. ‘direct instruction’, ‘collaborative learning’, ‘cognitive demand’), terms which are often very general in nature?
  15. What significance is accorded to the concepts and practices in question, as these vary from culture to culture and context to context (‘significance’ is not the same as ‘presence’, e.g. the ‘Big Five’ personality traits may be important in one culture but may be unimportant in another culture, even if they are present)?
  16. How acceptable, useful or naïve is it to reduce the dynamic interpersonal complexity of teaching and learning to a single number (e.g. effect size) and to invest so much in a single figure or a ‘yes’ or ‘no’ (‘yes, it works’; ‘no, it doesn’t’)? (See the sketch after this list.)
  17. How valid and reliable are proxy variables for matters which are not directly observable (e.g. intelligence; understanding; learning)?
  18. What kinds of data and methodologies are required to understand ‘what works’?
  19. What are the limits and possibilities of different methodologies and methods in providing useful research evidence of ‘what works’?
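To show just how much is compressed into the ‘single figure’ flagged in item 16 above, here is a minimal sketch (ours, not the book’s; the scores are invented) of one common such number, Cohen’s d, the standardized mean difference often reported from trials:

```python
# Hypothetical illustration of item 16: two whole distributions of
# pupil outcomes are reduced to one standardized mean difference.
import numpy as np

def cohens_d(treated, control):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    nt, nc = len(treated), len(control)
    pooled_var = ((nt - 1) * np.var(treated, ddof=1) +
                  (nc - 1) * np.var(control, ddof=1)) / (nt + nc - 2)
    return (np.mean(treated) - np.mean(control)) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
treated = rng.normal(52.0, 10.0, size=200)  # invented test scores
control = rng.normal(50.0, 10.0, size=200)

# Everything about the two groups, their spread, overlap and outliers,
# is summarized by a single number (here roughly 0.2).
print(f"d = {cohens_d(treated, control):.2f}")
```

The ‘yes, it works’ judgement then typically hangs on whether this one figure clears some conventional threshold, which is precisely the reduction that item 16 interrogates.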
Judgements and conditionality
  1. Under what conditions does something ‘work’, ‘not work’ etc.?
  2. In whose terms is ‘what works’ being judged (what ‘works’ in one person’s eyes may not in another’s)? There is no single version or judgement of ‘what works’.
  3. How to take account of the point that ‘what works’ is a matter of judgement rather than data, and that this judgement is imbued with moral, values-related and ethical co...

Table of contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright Page
  5. Dedication
  6. Table of Contents
  7. List of illustrations
  8. Acknowledgements
  9. Introduction
  10. PART I: Setting the scene for randomized controlled trials in education
  11. PART II: Randomized controlled trials
  12. References
  13. Index