1
Asking the Right Questions
Does education research have any impact on the instructional practices, curricula, and policies in your classroom, school, or district? Probably not, if you are like many educators we know. You may even secretly believe that your own common sense and experience are far more trustworthy than the experiments and observations of researchers. We all know individuals who wouldn't dream of buying a new car or choosing a treatment for a medical condition without researching the options. Yet on the job, they will commit hundreds of thousands of dollars of their schools' or districts' budgets to an innovative or supposedly exemplary program without carefully evaluating the available research findings.
One elementary school principal explained the problem this way: "We tend to move from one fad to another in order to demonstrate that we are 'state of the art' even though most of the activities have little impact. There is big money in selling education programs and consultants use 'research says' to sell programs that purportedly can fix just about anything. Most … teachers and administrators can't differentiate viable research from poor research" (Walker, 1996, p. 41).
One can certainly understand why some practitioners dismiss education research as irrelevant to their daily lives and continue to "do their own thing." Even insiders concede that there are problems with it: poor research designs and sloppy statistics (Cook, 1999), divisive bickering (Gage, 1989; Snow, 2001), and petty politics (Shaker & Heilman, 2002). Others are more optimistic about the potential of education research to inform practice: "Research is the most powerful instrument to improve student achievement, if only we would try it in a serious and sustained manner" (National Educational Research Policy and Priorities Board, 2000a, p. 1).
This statement serves as a challenge to both researchers and educators. Researchers have an obligation to produce useable knowledge for practitioners, but educators are no less accountable for applying what is already known to the practice of schooling. We bear an additional responsibility as well: that of holding publishers, curriculum developers, and consultants accountable for evaluating their products and models using rigorous research techniques and then making that research available to practitioners.
WHAT IS OUR APPROACH TO MAKING SENSE OF RESEARCH?
Before proceeding, it is important to clarify some fairly broad assumptions that we will make about research: (a) that one can frame a meaningful question related to educational practice, (b) that one can develop a hypothesis related to that question, (c) that one can design a study (whether quantitative, qualitative, or ideally a skillful combination of both) and collect data to assess the hypothesis, (d) that one can assess whether the data support the hypothesis with some degree of certainty (or uncertainty), and (e) that one can apply this knowledge, within reason, to inform decision making, whether at the classroom, school, district, state, or national level.1
Even if research appears to follow the preceding steps, one must be circumspect about accepting the "facts" that it purports to establish. In the manner of Sherlock Holmes, we have to sift through research evidence to determine what it is really saying. The issue of causality is a good example, one that receives particular emphasis in this book. Life in schools (and, indeed, life in general) is rife with causal statements: "Our test scores went up because of the new reading program." "Teachers are leaving the school in droves because of low salaries."
If this book accomplishes nothing else, it aims to convince the reader that causal statements cannot be made in a cavalier fashion. Schools are complex and multifaceted. Causal links are difficult to establish with certainty, if only because there are usually alternative, and sometimes equally plausible, causes for that which we seek to explain. For example, it may be that test scores went up not because of a new reading program, but because of an influx of well-to-do students or a sudden exclusion of low-achieving students from the testing. Or it may be that salaries, low as they are, have nothing whatever to do with the mass exodus of teachers. They are leaving because the principal makes their lives miserable. Although establishing clear causal links is a daunting assignment, the task is easier if one pays close attention to rules of thumb (to be introduced throughout the book) that have been developed and refined by generations of social scientists. This book will repeatedly emphasize that identifying "good" research hinges on understanding, and critiquing, the causal underpinnings of that research.
FIVE QUESTIONS ABOUT RESEARCH
Our district has tried numerous strategies: we lengthened the school day; we increased time on-task; we increased the graduation requirement; we mandated exit testing; and we put in a no-driver's-license-if-you-drop-out provision. Many other school boards have tried instituting similar enhancement policies. Locally we try to deal with attendance and discipline rules, but these measures alter the nature of the system without addressing the root causes of the problem. We have audited our rules for compliance purposes. What needs to be examined now is the unhappy consequence of these efforts: there have been no significant improvements in student achievement patterns. These innovations have failed to eliminate poor instruction and ineffective and redundant curricula. This raises the question of exactly what our professional roles are going to be to help more students become prepared for a new century. (Dorn, 1995, p. 7)
You can read between the lines of this lament by a Florida high school principal. His superintendent and school board no doubt issued a mandate: "Raise student achievement." This is a tough assignment at the high school level, or at any other level for that matter. Principal Dorn and his staff seemingly tried every strategy, idea, and innovation they could think of, and nothing worked. His frustration is palpable as he raises a very critical question: "What exactly am I supposed to be doing as a principal?"
This book does not, and cannot, offer a single answer to Dorn or to others who share his goal of improving schools. In fact, we are deeply skeptical of consultants, salespeople, and project leaders who purport to provide such answers. Rather, it suggests five questions that Dorn and educators like him should ask of research. By presenting these questions, we do not aim to waffle (thus evoking Harry Truman's plea for a one-armed economist, so that he might never hear "on the one hand" and "on the other hand"). The questions simply acknowledge two hard realities about education research.
First, all authors think that their research is "good" and worthy of a receptive audience. How can the beleaguered practitioner separate the wheat from the chaff? Some quality-control mechanisms already exist, of course. Some academic journals have more rigorous quality standards than others (enforced by impartial and anonymous reviewers). A great deal of the worst research is never published at all. Yet even good journals publish studies with overstatements, misstatements, and downright fabrications. Further, the proliferation of substandard journals, self-published Web sites, and advocacy research organizations means that it is increasingly difficult not to find an outlet for publication. Education is full of well-intentioned, but occasionally ineffective, attempts to synthesize and communicate research findings (e.g., Berliner & Casanova, 1993; Zemelman, Daniels, & Hyde, 1998). At the end of the day, a practitioner's common sense may have to be the final arbiter of what constitutes "good" research. We firmly believe that every practitioner can become a more informed and critical consumer of research, if armed with the right questions.
The second hard reality is that most researchers have the relative luxury of not having to worry about the implementation of their findings in classrooms and schools. Unfortunately, research findings do not always translate easily to the ambiguities of educational practice. Thousands of victims of botched staff development or reforms gone haywire can attest to that. What works in one context may fail miserably in another. Thus, research cannot be analyzed solely on the basis of the hermetically sealed bubble in which it was conducted; it must be evaluated in light of the context in which the findings will be applied, whether a classroom, school, district, or state. Once again, practitioners are often in the best position to decide what might work for them, based on their informed reading of the research.
To aid practitioners in their quest for understanding, we offer five questions:
1. The causal question: Does it work?
2. The process question: How does it work?
3. The cost question: Is it worthwhile?
4. The usability question: Will it work for me?
5. The evaluation question: Is it working for me?
Apply the first four questions to research before you adopt or implement a program, method, or policy. For example, if you (and your team) are considering the implementation of "multiple intelligences" as a way to organize curricula and design instruction, or contemplating the use of block scheduling as a means of raising achievement in your high school, seek out every available research study, both quantitative and qualitative. Read them carefully to determine if it (i.e., multiple intelligences or block scheduling) actually works, how it works, if it's worthwhile, a...