Web-Based Learning

Theory, Research, and Practice

Harold F. O'Neil, Ray S. Perez (Eds.)

About This Book

Web-Based Learning: Theory, Research, and Practice explores the state of the art in the research and use of technology in education and training from a learning perspective. This edited book is divided into three major sections:
*Policy, Practice, and Implementation Issues -- an overview of policy issues, as well as tools and designs to facilitate implementation of Web-based learning;
*Theory and Research Issues -- a look at theoretical foundations of current and future Web-based learning; the section also includes empirical studies of Web-based learning; and
*Summary and Conclusions -- highlights key issues in each chapter and outlines a research and development agenda.

Within this framework the book addresses several important issues, including: the primacy of learning as a focus for technology; the need to integrate technology with high standards and content expectations; the paucity of, and need to support the development of, technology-based curriculum and tools; the need to integrate assessment in technology and improve assessment through the use of technology; and the need for theory-driven research and evaluation studies to increase our knowledge and efficacy. Web-Based Learning is designed for professionals and graduate students in the educational technology, human performance, assessment and evaluation, vocational/technical, and educational psychology communities.


Information

Publisher: Routledge
Year: 2013
ISBN: 9781134811656
Edition: 1
Part I
POLICY, PRACTICE, AND IMPLEMENTATION ISSUES

Chapter 1

Evaluating Web-Based Learning Environments

Eva L. Baker
UCLA/National Center for Research on Evaluation, Standards, and Student Testing (CRESST)
Harold F. O'Neil
University of Southern California/CRESST
Despite best efforts, technology-based innovations seem to have persistently avoided significant, innovative evaluation. Standard approaches, perhaps spiced with a Web-site survey of satisfaction (Kirkpatrick, 1994; Sugrue & Kim, 2004), dominate evaluation for those cases where evaluation rises to attention. Why is this so? Part of the problem is the speed of technology development, and the difficulty of conducting evaluations of learning as deadlines loom. But there may be other reasons, related to tradition, novelty, unrealized claims, or the obviousness of a good idea, that inhibit evaluation of Web-based learning environments. Web systems have not yet been caught up in the wave of results-based activity that has hit hard the schools, business, and the military, in spite of the availability of several versions of evaluation or accountability standards (e.g., Baker & Linn, 2004; Joint Committee on Standards for Educational Evaluation, 1981; Stufflebeam, 2004). This chapter is intended to describe Web-based evaluation, how it could work, and at what points of entry the process may begin. Two preliminaries are required: First, we delimit the realm of Web-based learning so it is sensible to describe its evaluation approaches; second, we clarify the meanings we ascribe (but which vary by community) to common evaluation terms.

WHAT COUNTS AS WEB-BASED LEARNING?

When “Web-based learning” is used, a range of environments may come to mind. In Table 1.1, we list nine somewhat overlapping conceptions of Web-based learning, varying from “traditional” or more formal uses to environments where learning is incidental to achieving other goals. One frequent type of Web-based learning is that focused on formal courses. In this case, the term is sometimes used synonymously with distance learning, where a course (either academic or professional development) is wholly or significantly resident on a Web site, and where particular course objectives are partially or entirely intended to be met by sequenced instructional interventions. A second variation is blended courses, any of which may have varying degrees of instruction provided online with a significant component of personal, teacher, classroom, or peer support. A third form of Web-based learning involves the provision of course support materials, feedback, and opportunities for interaction by distance, but with the majority of instruction taking place in face-to-face, unmediated environments. Such is the case, for instance, in many university courses. A fourth form involves isolated units of instruction, where the majority of the course is offered in its traditional live form, but there is a particular component intended for practice or enrichment that is available on the Web. These four types of distance learning are used in business, in the military, and at postsecondary institutions as well as in elementary or secondary schools. A fifth type (Web-based experiences, in contrast to more formal, purpose-driven uses) may also take place in formal school-like settings, but possesses more diffuse goals. For example, permitting children to play with drawing programs, matching games, or voluntary choices of software may have the consequence of meeting general goals of computer literacy (using a mouse, starting, stopping, finding one's folders), as well as supporting incidental learning inherent in the program.
TABLE 1.1
Nine Types of Web-Based Learning Experiences
1. Formal course or module of distance learning—goal focused and wholly delivered through a distributed network. Place and time of instruction partially unconstrained.
2. Blended course—goal focused, core instructional delivery and interaction is shared by live and computer-supported instruction. Some synchronous instruction required.
3. Technology-supported courses—course materials, assignments, chat, and other features are available to augment a traditional live teacher, but the balance is on live instruction.
4. Technology-enriched environments—practice opportunities or simulations particularly for subtasks are provided by the Web. Most instruction is live.
5. Discretionary Web activity—enrichment or other activities supporting computer literacy skills.
6. Tool use—learning that occurs related to the use of interactive tools involving search, document preparation, spreadsheet and database design, and collaborative work.
7. Focused games and simulations—goal-focused or goal-emergent, with a set of learning expectations including content, strategy, and persistence.
8. Exploratory games and simulations—goal-emergent; unpredictable learnings and process outcomes occur, with opportunities to investigate relationships among procedures, constraints, and processes.
9. Domain-specific incidental learning—relevant to learning the rules and rewards of using (usually) commercial sites.
A sixth variation of Web-based learning occurs with the use of tools that may serve both formal and informal goals. Students’ use of word processing, browsers, spreadsheets, and the like may be motivated by particular assignments but may also provide practice in fluency of use of computer software. Strategies for search, planning, and feedback in addition to the content addressed are often supported by tools.
A seventh, important Web-based approach falls under the growing use of games and simulations to provide complex practice environments or to teach specific planned goals. These games, developed for commercial distribution, may involve role playing, strategy planning and execution, and collaboration (or conflict) with other users. Games, which are highly motivational, often include competitive components and almost unlimited paths, and require significant inferences to be made about the environment. The simulation component creates lifelike stimuli and complexity for learning. The eighth approach involves less goal-oriented games and simulations, where the lesson is to acquire particular processes so that the learner has complex understanding of the processes needed for success. Frequently the learner is encouraged to explore the effects of modifying variables, or the simulation gives the learner even greater opportunity to design the circumstances in which he is involved.
A ninth type of Web-based learning occurs in the process of using systems intended to accomplish ends other than learning. Informal or instrumental learning follows from an eBay user's experiences (learning when to bid, how to check the seller's credentials, the social expectations of that community), and learners may be rewarded or punished by the consequences (forgetting to ask if “new” meant “seconds” or to check the cost of the shipping). The myriad opportunities for e-commerce or e-information bring with them fluency with particular procedures, driven by desires to accomplish specific ends (e.g., buy a computer). Evaluation here is through self-assessment—Did I get what I thought? And did I pay more than the others?

Terms Used in the Domain

Within a particular community, it is assumed that technical terms have similar meaning. For example, in the education world, for almost 40 years, summative evaluation (Scriven, 1967) has signified judgments made as a basis for comparative decisions, to choose a course of action from among competing interventions. The term “formative evaluation” has a slightly broader interpretation, including reviews of data related to interim or desired outcomes, arrangements of settings and instructional sequences, and achievement of different groups. Nonetheless, the core meaning remains that coined by Susan Meyer Markle, also many years ago (1967): developmental testing (i.e., testing in the process of developing a program, system, or intervention), whose purpose is to improve the functioning and impact of instruction.
Formative testing, or formative assessment, is a more recent entry (Black & Wiliam, 1998; Pellegrino, Chudowsky, & Glaser, 2001) and refers to the use of interactions of teacher and student to make judgments about student understanding and the next useful learning experience. In technical systems, Mislevy, Steinberg, Breyer, Almond, and Johnson (1999), following on the work of artificial intelligence-supported tutors (Anderson, 1983; Corbett, Koedinger, & Anderson, 1997), described the updating of a student model as formative. Student models are the sum of inferences made about an individual's learning, based on his or her responses to tasks, tests, and other program-based information. These models may be based on theoretical progress toward an expert's level of attainment, documented paths that have led to different levels of success (Vendlinski & Stevens, 2000), or probability estimates related to a network of student responses (Mislevy et al., 1999).
Variations also involve the use of terms intended to mean the measurement of achievement. Testing and assessment are almost synonymous in education, with the nuance that testing has a harder edge and a sometimes more standardized connotation. Similarly, achievement and performance are used interchangeably in education, with the nuance that performance may connote constructed or demonstrated learning, including physical skill. Evaluation in education is used to describe judgments of status about programs, institutions, and individuals for the purpose of improvement (formative) or decisions (summative). The notion of research—randomized field trials of interventions—is one conception of evaluation (Cook & Campbell, 1979; Freeman & Sherwood, 1970). Although randomized field trials are the gold standard of “research” design (as has been true at least since the days of R. A. Fisher—see Fisher, 1951), current usage focuses on decisions to be made, rather than on conclusions to be drawn, a distinction made in a landmark volume by Cronbach and Suppes (1969), and on interventions rather than on operationalized variables, much like the early days of evaluation (Freeman & Sherwood). In practice, differences in usage of terms can create confusion. For instance, the term “performance” in some military settings not only signals the “doing” of tasks, but also specifies their setting—on-the-job. It would follow, to a military trainer, that performance tests or assessments could never occur in school-like venues, but only in ongoing job settings.
To further confuse the issue, the military and some business enterprises frequently use the term “assessment” to mean the evaluation of programs, policies, or interventions, as in technology assessment (Baker & O'Neil, 1994; O'Neil & Baker, 1994). So to assess training does not necessarily include or exclude the measurement of individual or team achievement or performance. It might mean review the status or content of a program. In addition, computer scientists have their own spin on these topics, with assessment and performance sometimes focusing on questions of preferences and performance as it refers to computer software systems rather than to individuals (O'Neil & Baker).
All of that said, it would be desirable to standardize language across groups, both to facilitate interactive communication and to allow appropriate inferences to be drawn from research and development in adjacent fields. The best we can do is to specify how we use terms here:
Formative evaluation is information obtained during the developmental stages of a product or system, used to revise the system with the intention of making it more effective and/or less costly. Minimally, formative evaluation should address interim and targeted learner outcomes.
Summative evaluation is comparative study, typically of contending programs, usually requiring strong research designs (experimental), criterion measures that span goals of contending interventions, verification of treatment or program implementation, and results used to make choices of programs for goals, groups, or settings. Cost is usually an important factor.
Performance refers to a product that is created or a process that is available for observation. Constructed response(s), usually multistep, are made by the learner.
Assessment is measuring through systematic approaches the achievement, affective states, or performance of individuals or groups.

EVALUATING WEB INTERVENTIONS: GOALS

What should evaluation of Web-based learning try to achieve? One clear directive is that it should authenticate claims that the provided interactions result in planned outcomes; that is, allegations that students learn something are supported.
Moreover, in considering different types of evaluation practice, we link them to the nine types of Web-based learning in Table 1.1.

Systematic Studies

The first class of studies is those that are tightly designed; they have identified goals, measures, and often instrumentation to gather process findings. For the most part, they are implemented as other evaluations are: pretests, interim measures, posttests, and measures of satisfaction [these correspond to Kirkpatrick's (1994) first two levels of evaluation—reaction (attitude) and learning]. Web-based evaluation can make this a simpler task (for instance, learners can easily be placed in variations of treatment without much trouble, and their responses to exercises or tests unobtrusively tabulated). These evaluations, however, can be troublesome to administer, with some problem in finding comparable control groups (particularly for technically demanding tasks), problems of persistence, too few students to infer much about patterns of engagement, and so on. We have been successful in using temporary employment agencies to select groups to which we could administer, modify, or withhold treatment. Such an approach is partially successful but never mirrors the exact characteristics of the desired learner (in a job setting, or voluntarily taking a course—in fact, paying for it). Also, because using “temps” costs money, replicating the effort of an entire term is enormously expensive. For that reason, many studies of courses—the first three types of Web-based learning—are more typically evaluated by having individuals work through components. The most frequent option is post hoc designs.

Post Hoc Evaluation

Evaluations conducted without much scientific flavor, and after the fact, correspond to Cook and Campbell's (1979) most flawed design. People are asked how they reacted to course components and may be given a posttest related to information thought to be important. They may be followed up in the future to see on-the-job activities presumably influenced by the intervention. These approaches correspond vaguely to Kirkpatrick's (1994) formulation of evaluation. But they need three conditions to be met: (a) the comparison of observed performance has to be calibrated against something—weakest is prior performance of trainees, nonequivalent control groups come next, and a true randomized experiment (using the unit of randomization as the unit of analysis) would be strongest; (b) a clear statement of the intended outcomes (a hard one for many Web-based interventions); and (c) developed measures of performance that would count as making significant progress toward the expertise envisioned.

Summative Approaches to Evaluation

Many Web-based evaluations are conducted in a post hoc manner, that is, a completed system or course is tested, sometimes in a comparative way, sometimes just as a simple post hoc study. Why? The system takes time and energy to be created. Bugs have to be discovered and fixed. The priority often is getting a course up and running by the time students are to be there. As a result, there may be little time for the niceties of good evaluation. Sometimes the course examination that has been used for non-Web-supported instruction becomes the examination for the posttest. The consequences of this decision reduce pressure on the evaluator and provide for a basis of comparison that is widely used (but deeply flawed)—that is, the comparison between Web-based and regular instruction. Most seriously, however, such examinations may miss ...
