applied research, independent variables, dependent variables, internal validity, experimental control, functional relation, evidence-based practice, reliability, threats to internal validity, nomothetic, baseline logic, idiographic, validity, history, maturation, testing, instrumentation, procedural infidelity, attrition, attrition bias, sampling bias, data instability, cyclical variability, multitreatment interference, regression to the mean, adaptation, Hawthorne effect
The goal of science is to advance knowledge. The process by which we advance knowledge is generally via research—the systematic investigation and manipulation of variables to identify associations and understand processes that occur in typical (non-research) contexts. Of course, research processes are limited; for example, outcomes of research studies have been reported to be non-replicable (Open Science Collaboration, 2015); to be dependent on counterfactual conditions (Lemons, Fuchs, Gilbert, & Fuchs, 2014); to fail to generalize outside of research contexts to applied or authentic settings (Spriggs, Gast, & Knight, 2016); and to be largely inapplicable to “real” problems faced by practitioners (Snow, 2014). How then does research contribute to the advancement of knowledge, and does it do so in a useful manner? In this chapter, we introduce the concepts of applied research and evidence-based practice, describe different levels of evidence based on research type, and explain three primary research approaches and their corresponding rationales and assumptions. We conclude the chapter by describing similarities and differences between research and practice.
If research is a set of processes by which we produce information about associations and processes of interest, what then is applied research? Basic research is concerned with the advancement of knowledge that may or may not have immediate and specific application to practical concerns. Applied research involves systematic investigation related to the pursuit of knowledge in practical realms or to solve real-world problems. For example, basic research might inform science related to the association of running and behavioral abnormalities in a mouse model of Down syndrome (Kida, Rabe, Walus, Albertini, & Golabek, 2013). Applied research might seek to identify interventions that result in improved physical activity for young children with Down syndrome (Adamo et al., 2015). Researchers and practitioners often seek to engage in applied research not only to add to the knowledge base for a specific topic, but also to improve outcomes for specific participants (researchers) or clients (practitioners). We refer to practitioners who engage in research as scientist-practitioners (a label coined by Barlow, Hayes, and Nelson in 1984 to describe interventionists who make data-based decisions an integral part of their practice). In applied research, we are most interested in determining the relation between independent variables (the variables manipulated by researchers, i.e., interventions) and dependent variables (the variables we expect to change given the manipulation, i.e., target behaviors), to solve problems of clinical and educational practice.
Is it possible to incorporate scientific methodology into the daily routine of practitioners in schools, clinics, and the community? It is, but it’s not an easy task. Conducting applied research in authentic settings has the potential to advance science, to document changes in behavior, and to establish responsibility for the change. Before moving on to the research task itself, we would like to elaborate on the importance of these goals.
Advancement of Science
Through the work of Skinner and Bijou, a system of behavior analysis has been developed that includes a philosophy of behavior development, a general theory, methods for translating theory into practice, and a specific research methodology. This system was new in the scope of human evolution and the advancement of science. It has gained acceptance and verification through the successful application of its concepts and principles. One general “test” of the system has been the demonstration of effectiveness in a variety of settings, in basic and applied applications. Applied behavior analysis has been adopted and made an integral part of special and general education, speech-language therapy, clinical and school psychology, neuropsychology, recreation therapy, adapted physical education, and many other disciplines. Applied research, focused on specific problems of learning and reinforcement in schools, clinics, and communities, supports the advancement of science and knowledge in a given field while also making a direct impact on clients and consumers.
Not all practitioners may choose to be applied researchers, especially given the complexities of conducting applied research in authentic settings; however, most practitioners can contribute to the advancement of science and their discipline by collaborating with those who do. Likewise, researchers and scientists can contribute to practice and enhance the applicability of their research by collaborating with practitioners. Eiserman and Behl (1992) addressed researcher-practitioner collaboration in their article describing how special educators could influence current best practice by opening their classrooms to researchers for the purpose of systematic research efforts. They pointed out the potential benefits of such collaborations, not the least of which was teachers becoming interested in conducting their own research and bridging the gap between research and practice (p. 12). More recently, Snow (2014) suggested educational research should include more collaboration with practitioners to address applied problems. This position is not new; that single case designs (SCDs) are particularly well suited to answering these applied questions has been acknowledged for decades (Barlow et al., 1984; Borg, 1981; Odom, 1988; Tawney & Gast, 1984). Encouragement of practitioner involvement in applied research efforts, as defined by Baer, Wolf, and Risley (1968, 1987), acknowledges their potential contribution in addressing “real” problems, which need to be addressed under “real” conditions, with available resources. It cannot be overstated that practitioners are often confronted with issues or problems overlooked by researchers. Thus, if practitioners collaborate with researchers, or acquire the skills to conduct their own research, they can generate answers to questions that will advance science on issues that are relevant to practice.
Advancement of Practice
Much applied research in education, psychology, speech pathology, occupational therapy, and related fields has been conducted in controlled environments (lab schools, research institutes, private clinics, medical centers) by highly educated research professionals who have access to resources beyond those typically available to practitioners. Research generated in such centers is important to advancing our understanding of human behavior and how to positively effect change; however, the extent to which effective interventions generalize to settings outside these “resource-rich” and controlled environments needs to be shown. Thus, there are many research possibilities that the teacher/therapist-researcher can pursue in their classroom or community-based clinic that will add to our understanding of how to better serve those under their care.
Baer et al. (1987) addressed the need for applied researchers to determine the contexts in which interventions succeed and fail. When research is conducted under highly controlled conditions, as is often the case in studies using SCDs, the ability of those working in “typical” or “authentic” community settings to replicate conditions may be difficult, if not impossible. That is, interventions found to be effective in resource-rich controlled settings may not be implemented with the same level of fidelity, thus affecting the outcome of the intervention. It is important for applied researchers to identify the versatility and latitude of a particular intervention prior to advocating its use. In fact, through “failures to replicate” we seek out answers to “why?”, and with perseverance, identify modifications to the original intervention that result in the desired behavior change. Such discoveries are important to the advancement of practice in that our goal is for changes in behavior to generalize and maintain in natural environments. Through collaboration with applied researchers, the contribution made by teachers and therapists will increase the probability that instructional strategies and interventions under study will improve practice as delivered by other teachers and therapists working in community schools and clinics. Moreover, the cross-discipline emphasis on implementation science (Cook & Odom, 2013; Forman et al., 2013) has clearly established that the likely implementation of an intervention, given typical contexts and supports, is a critical component of studying evidence-based practices. The applied researcher who demonstrates positive changes in participants’ academic, adaptive, or social behavior produces evidence of a benefit of the instructional process.
Empirical Verification of Behavior Change
Successful teachers and therapists must demonstrate that they can bring about positive behavior change in their students or clients. Practitioners should expect that increasingly informed parents and clients will ask for data on behavior change for meaningful outcomes, and then will ask for some verification that the practitioner’s efforts were responsible for that change. Advances in technology have made collecting, organizing, presenting, and sharing data increasingly accessible. Practitioners who use these practices and collect data on client or student behavior can show behavior change that occurs over time; however, sometimes behavior change may be the result of other factors (e.g., additional services of which the practitioner was unaware). The use of experimental research designs, such as SCDs, allows the practitioner to go one step further—to show a causal link between his or her practices and the child’s behavior change. A study with adequate mechanisms for ensuring that outcomes are related to the intervention procedures rather than extraneous factors is said to have adequate internal validity. Studies with high levels of internal validity allow researchers to demonstrate experimental control—to show that the experimental procedures (intervention), and only the experimental procedures, are responsible for behavior change. A researcher does this by carefully eliminating other potential explanations for behavior change; this concept will be discussed at length in later chapters. When experimental control is demonstrated, we have verified that there is a functional relation between the independent and dependent variables—that is, that the change in the dependent variable (behavior) is causally (functionally) related to the implementation of the independent variable.
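To make the idea of repeated measurement concrete, the sketch below compares hypothetical session-by-session counts across a baseline (A) and intervention (B) phase and computes percentage of nonoverlapping data (PND), a commonly reported single case nonoverlap metric. The data values and the simple A-B structure are invented for illustration only; an A-B comparison by itself does not demonstrate a functional relation, which requires a full experimental design with replication.

```python
# Hypothetical session data for illustration (not from a real study).
baseline = [2, 3, 2, 4, 3]        # correct responses per session (phase A)
intervention = [6, 7, 9, 8, 10]   # correct responses per session (phase B)

def phase_mean(data):
    """Mean level of responding within a phase."""
    return sum(data) / len(data)

def pnd(baseline, intervention):
    """Percentage of nonoverlapping data (PND): the percent of
    intervention points exceeding the highest baseline point,
    for a behavior expected to increase."""
    ceiling = max(baseline)
    above = sum(1 for x in intervention if x > ceiling)
    return 100 * above / len(intervention)

print(f"Baseline mean: {phase_mean(baseline):.1f}")          # 2.8
print(f"Intervention mean: {phase_mean(intervention):.1f}")  # 8.0
print(f"PND: {pnd(baseline, intervention):.0f}%")            # 100%
```

Level and nonoverlap summaries like these supplement, but never replace, visual analysis of graphed data within an appropriate design; ruling out extraneous factors is the job of the design itself, not of any summary statistic.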
At no time in history has accountability in education, psychology, behavioral sciences, and related fields been more important. Recent guidelines in the Individuals with Disabilities Education Improvement Act (IDEIA) and the Every Student Succeeds Act (ESSA) mandate the use of evidence-based practice (alternately, “scientific, research-based intervention” [IDEIA] or “empirically supported practice”; Ayres, Lowrey, Douglas, & Sievers, 2011). Similarly, the American Psychological Association and the Behavior Analyst Certification Board have standards requiring the use of evidence-based interventions. Evidence-based practice refers to intervention procedures that have been scientifically verified as being effective for changing a specific behavior of interest, under given conditions, and for particular participants. Though the term is relatively new, the idea that research should guide practice is not, particularly in the field of applied behavior analysis. Baer et al. (1968) defined applied behavior analysis and emphasized the importance of quantitative research-based decisions for guiding practice. Their emphasis on a low-inference decision model, based on repeated measurement of behavior within the context of an SCD, set a standard for practitioners determining intervention effectiveness 50 years ago. At the time of their article, published in the inaugural issue of the Journal of Applied Behavior Analysis, there was no shortage of critics who questioned the viability and desirability of an empirical scientific approach for studying and understanding human behavior, a response due in part to the controversial position articulated by B.F. Skinner in his classic book, Science and Human Behavior (1953).
The behavioral approach has passed the test of time: as evidenced by the numerous SCD studies that have influenced practice across many disciplines, it can and does provide a scientific framework for understanding and modifying behavior in positive ways. Few would question that Baer et al. (1968) established evidence-based practice as a core value for applied behavior analysts, a value that has yielded best and promising practices across numerous disciplines within the behavioral sciences. The current zeitgeist and standards continue this long-standing tradition for researchers and practitioners in a variety of fields.
What constitutes a “practice”? Horner et al. (2005) defined practice as it relates to education as “a curriculum, behavioral intervention, systems change, or educational approach designed to be used by families, educators, or students with the ...