1 Research Methods in Learning Design and Technology
A Historical Perspective of the Last 40 Years
Enilda Romero-Hall
This book serves to combine knowledge related to research methodologies in the instructional design and technology (IDT) field. It will address questions such as: How have our research methodologies evolved? What methodologies can be used to investigate traditional and new research environments? How can we apply innovative research methodologies to address questions related to learning, design, and technology? This book will provide IDT scholars with a solid foundation in the different methods that can be used to investigate a research problem. This knowledge helps researchers understand the rationale for applying specific procedures or techniques to identify, select, process, and analyze the information needed to address a research question.
As researchers in the IDT field, our research interests, learning environments, types of learners, challenges and experiences, and forms of content are changing. We need to equip ourselves with a research toolkit that allows us to properly investigate this new educational landscape. However, before reviewing the different research methodologies presented in this book, it is important to provide some historical context on the evolution of educational research and, more specifically, of research methods in the instructional design and technology field.
The 1980s: Positivistic versus Post-positivistic Views
In 1982, researchers discussed predictions for the future of educational research in the coming decade. The predictions from educational researchers included: a) the potential rise of better research syntheses thanks to the "meta-analysis" method, b) wishes to focus less on the classification of humans and more on understanding central variables, processes, and concepts in the teaching and learning of students, c) the desire for more ethical practice in which the trade-offs and weaknesses of research settings are fully disclosed as standard practice, d) predictions involving the emergence of the "field of instructional psychology" and its focus, and e) hopes for improved training of research workers with an emphasis on the history of education, the philosophical issues that concern educators, methodologies of instruction, and the structure of educational systems (Farley, 1982). One particular prediction from Benjamin Bloom brought attention to the isolation and privacy of our research processes and dissemination (Farley, 1982). Bloom observed that in other fields, such as the physical and biological sciences, research efforts are more intensive: knowledge among researchers of the work that others are doing is widespread, and academics work at the edge of the field, whereas in education we have difficulty even knowing where the edges of the field are. Bloom predicted and suggested that education scholars should use group processes to probe more deeply into underlying issues and strategies (Farley, 1982). If you are familiar with the concerns and issues of educational research, it will be clear to you that all of these predictions for the 1980s remain hopes and desires for research processes today.
Discussions in educational research that were often documented in the 1980s related to the persistence of positivistic views and post-positivistic thoughts on educational research (Phillips, 1983), which in turn led to publications about the dogmas of our field (Howe, 1985). Positivistic philosophers a) were rigid believers that the scientific method could be applied to human affairs, including the study of morals, b) had great hostility toward metaphysics, and c) adopted the verifiability principle, which stated that something was meaningful if and only if it was empirically verifiable (Phillips, 1983). There were claims that positivism had died by the 1980s and that new and revolutionary accounts of the nature of science had emerged. However, it is well documented that many researchers felt that the move away from the legacy of positivism had not gone far enough (Howe, 1985; Phillips, 1983; Smith, 1983).
As Howe (1985) stated: "educational research had remained largely untouched by the epistemological advances that brought about the repudiation of logical positivism." Dogmas ingrained in educational research were argued and explored: quantitative versus qualitative and facts versus values (Howe, 1985; Smith, 1983). These dogmas and the beliefs of educational researchers led to serious and heated arguments, in which name-calling was not uncommon (i.e., "number crunchers versus storytellers"). During the 1980s, various researchers published work to examine these dogmas and clarify their perspectives (Howe, 1985; Smith, 1983). They hoped to dismantle assumptions about different research methods and focus researchers more on "what works" given the goal of the investigation.
To solidify the argument that moving away from positivist epistemological views was critical for educational researchers, Cziko (1989) shared in an essay several lines of reasoning related to the unpredictability of human behavior and the importance of description and interpretation of educational phenomena. The factors underlying the unpredictability of human behavior, according to Cziko (1989), included: individual differences, chaos, the evolutionary nature of learning and development, the role of consciousness and free will in human behavior, and the implications of quantum mechanics. These reasons and positions regarding educational research continued to evolve, moving from the traditional view of research as experimentation toward a more realistic understanding of the complexities of educational environments (Brown, 1992; Schoenfeld, 1992b). This was not only a shift in epistemological views influencing methodology; it was also a shift from laboratory research to classroom settings (Romero-Hall, Hsiao, & Gao, 2018).
The 1990s: Immerse and Investigate in the Real World of Teaching and Learning
By 1992, researchers in education, specifically in the learning sciences, were posing the question: "What do you do when the perspective, paradigms, and methods that you know fail to provide adequate explanations of the things you need to understand?" (Schoenfeld, 1992b). It was clear by then that educational research was evolving; researchers were eager to immerse themselves in and investigate the real world of teaching and learning. In a special issue of the Journal of the Learning Sciences titled "Research Methods in and for the Learning Sciences," guest-edited by Alan Schoenfeld, researchers discussed new methods used to explore different situations, behaviors, and learning experiences (Brown, 1992; Saxe, 1992; Schoenfeld, 1992a). This special issue provided insights into the true complexities of research in post-positivist times, beyond just qualitative versus quantitative. Brown (1992) shared the intricacies of engineering innovative learning experiences in inner-city classrooms while also working out how to investigate learning communities and interpretation among learners. For Brown (1992), the methodological issues in this type of complex intervention study included: a) cross-fertilization between classroom and laboratory settings to enrich understanding of the developmental phenomenon in question, b) the decision between nomothetic (the study of single variables across many subjects) and idiographic (the thorough study of individual cases) approaches, and c) data selection. Schoenfeld (1992a) examined methodological issues related to the analysis of videotapes of problem-solving sessions. Using two case studies, Schoenfeld (1992a) described "wrestling with the data" as well as issues related to theory building, validity, and reliability, among others. Last, Saxe (1992) described in detail a framework that he developed to better understand children's learning in cultural practices and educational activities. This framework was created out of the need to capture and analyze the dynamics of cognitive development in a unique community in ways that went beyond traditional approaches such as the structural-developmental traditions of Jean Piaget and Heinz Werner (Saxe, 1992). What Saxe (1992) found was that traditional methods of researching cognitive development ignored "the interplay between unique sociocultural histories and the constructive cognitive activities of individuals" (p. 216).
The Early 2000s: We are More than Controlled Experiments
History teaches us that, to understand learners and their experiences (cognitive, physical, and emotional), researchers needed to evolve from traditional ideologies of research toward more intricate research methods that carefully consider "context." For learning design and technology researchers this became particularly critical as complete, complex, and interactive learning environments became topics of interest for researchers in the educational technology field. By 2002, research in the educational technology field had explored content (instructional design), format (message design), and interactions (simulations) in learning experiences (Winn, 2002). According to Winn (2002), the new age of research in educational technology would focus on learning environments. He specifically predicted a focal point on a) artificial learning environments, b) inscriptional systems, c) social aspects of learning, and d) distributed cognition in learning communities. For Winn (2002), this change in research topics in the educational technology field meant that researchers also had to change their research questions and methods by "adjusting to the demands of studying increasingly more complex interactions between students and their environments" (p. 347). Winn (2002) stated that design experiments could be successful at determining effectiveness; however, just like Brown (1992), he believed that experiments can also distort the view of a setting where variables cannot be controlled.
In response to Winn (2002), Mayer (2003) wrote a seminal piece to clarify that recommendations for practice should be evidence-based, using empirical research (ranging from controlled experiments to systematic observation in natural contexts). Mayer (2003) also expressed that "research should be issue-driven, not method driven" (p. 362). Mayer (2003) highlighted the importance of this type of methodology by stating: "Controlled experiments offer an unsurpassed methodology for establishing the effectiveness of educational programs. The application of experimental methods to behavioral research is perhaps one of the greatest scientific advances of the twentieth century" (pp. 362–363). The idea that controlled experiments should be disregarded or criticized, as Winn (2002) had expressed previously, was troubling to Mayer.
This kind of back-and-forth argument about research methods in educational technology set the stage for our academic discourse and the use of various research methodologies. Researchers in the educational technology field were encouraged to adequately consider the research question and engage in their investigations using the most appropriate method at their disposal (Hannafin, Orrill, Kim, & Kim, 2005; Reeves, Herrington, & Oliver, 2005; Richey & Klein, 2005; Ross, Morrison, & Lowther, 2005; Savenye & Robinson, 2005; Winn, 2003). In a special issue of the Journal of Computing in Higher Education, Ross et al. (2005) discussed extensively the different types of experimental designs, their advantages and disadvantages, and the potential threats to the internal validity of an experiment. Ross et al. (2005), just like Mayer (2003), expressed that experimental methods in educational technology are a powerful way of determining what works in education. However, in the same special issue, Reeves et al. (2005) encouraged education faculty to consider a design research approach. Some of the reasons for the use of design research included: a) its focus on broad-based, complex problems, b) it integrates known and hypothetical design principles to render plausible solutions, c) it consists of rigorous and reflective inquiry to test and refine learning environments, and d) it involves the continual refinement of protocols and questions along with a commitment to theory construction and explanation, among other reasons (Reeves et al., 2005). Others, such as Richey and Klein (2005), provided insights into the importance of developmental research, which seeks to create knowledge grounded in data systematically derived from practice. In particular, developmental research addresses areas such as product design and development, product evaluation, validation of tools or techniques, model development, model use, and model validation. "It is research that is intricately connected to real-world practice" (Richey & Klein, 2005), and due to its complexity it was conceived of as a mixed-methods approach that could involve: case study, survey research, expert review, experimental research, interviews, replication, observations, document analysis, and others. Last, in support of post-positivist views in educational research and the use of qualitative methods in educational technology research, Savenye and Robinson (2005) provided a comprehensive overview of the appropriate use of qualitative methods, their characteristics, the various types of qualitative methodologies, and the assumptions that are often made about these types of research methods. Although extensive discussions about quantitative and qualitative comparisons had already occurred many years earlier (Howe, 1985; Phillips, 1983), Savenye and Robinson (2005) aimed to share some of the challenges and opportunities that researchers should consider when conducting qualitative research. Even after years of discussions about the importance of post-positivist views in educational research, Savenye and Robinson (2005) reiterated that researchers conducting qualitative inquiry faced challenges related to "proof of rigor" through generalizability, validity, and reliability imposed by those who strive to treat qualitative research with an experimental research mindset.
Perhaps one of the most devastating aspects of this revolving door of discussion about the adequate use of research methods and the different paradigms was the skepticism it created toward educational technology, its validity, and its impact in education settings (Hannafin et al., 2005). By 2005, despite thousands of investigations on the topic of educational technology, definite answers remained disturbingly elusive about the impact of technology to support individualization, improve efficiency, and improve access to learning experiences (Hannafin et al., 2005). According to Hannafin et al. (2005), factors contributing to so many contradictory findings in the educational technology literature included: "the emphasis on comparison research, emphasis on laboratory research, differences in problem framing and research methodology, bias toward statistically significant effects and the primacy of individual disciplines" (p. 8). Their recommendations to elevate educational technology research went far beyond specific methodologies. Instead, to improve research quality, Hannafin et al. (2005) proposed a framework for educational research that considered a deeper understanding of levels of research and considerations of use. Similar recommendations were given by Klein, Martin, Tutty, and Su (2005), who during the same time period published a study focused on the research methods, processes, and issues taught to instructional design and technology graduate students. The findings from the investigation revealed that instructional design and technology graduate students were taught: a) a range of quantitative and qualitative methodologies in research courses and b) typical steps for how to conduct a research study. However, the results of the study also revealed that learning about the professional context of rese...