
eBook - ePub
Diagnostic Monitoring of Skill and Knowledge Acquisition
- 528 pages
- English
- ePUB (mobile friendly)
- Available on iOS & Android
About this book
An adjunct to the increased emphasis on developing students' critical thinking and higher order skills is the need for methods to monitor and evaluate these abilities. These papers provide insight into current techniques and examine possibilities for the future. The contributors to Diagnostic Monitoring of Skill and Knowledge Acquisition focus on two beliefs: that new kinds of tests and assessment methods are needed; and that instruction and learning can be improved by developing new assessment methods based on work in cognitive science.
Edited by Norman Frederiksen, Robert Glaser, Alan Lesgold, and Michael G. Shafto. Subject: Pedagogy & General Education.
1 Intelligent Tutors as Intelligent Testers
INTRODUCTION
In the early days of educational testing, tests were developed for the purpose of making quantitative assessments of individuals’ general levels of ability and achievement relative to others within a group. The use of such norm-referenced tests was principally for selecting students to enter an educational program or for assessing the outcomes of instruction. The instructional need for precise information about the nature of a student’s prior knowledge of a given domain and of its development over the course of learning was not addressed. Criterion-referenced testing was introduced with the goal of promoting individualized, adaptive instruction (Glaser, 1963). These tests were intended to give direct information about what particular knowledge and skills a student has attained, not normative assessments of a student’s standing within a skill domain. Viewed from a cognitive perspective, such tests were to provide knowledge of a student’s prior mental models, misconceptions, or problem solving skills. This information would have a great bearing on the kinds of problems that the student should be given to promote learning, as well as on the nature of the hints and explanations of problem solving that are likely to be most helpful in learning. By measuring students’ knowledge and skills as they are acquired during learning, more effective instructional manipulations could be introduced that would serve the needs of the individual student. This concept of a criterion-referenced test is a precursor to the idea of creating a student model within an intelligent tutoring system.
The great promise of criterion-referenced testing for creating an effective new form of individualized instruction that ensures mastery by all students has not materialized. In large measure, this may be due to limitations in the technology of testing, that is, in the multiple-choice, paper-and-pencil tests that were used to implement the idea. Tests using such a format can be developed relatively easily for purposes of assessing factual knowledge or the performance of a particular skill. However, use of such a testing method makes it very difficult to determine a student’s methods and strategies in solving a problem, or his/her mental models and misconceptions within a domain. Thus, developers and users of criterion-referenced tests have tended to focus on individual elements of knowledge and skill rather than on the process of problem solving, and this has led to the fractionation of curricula into collections of knowledge and skill elements. Although each of these individual elements was testable, the higher-order integration of those skill and knowledge elements in mental models for a domain and in strategies for solving problems was not properly addressed using the technology of paper-and-pencil tests (Ward, Frederiksen, & Carlson, 1980). Because these higher-level problem-solving skills remained unassessed, educational systems were not driven to improve instruction in those skills and strategies (N. Frederiksen, 1984).
The advent of intelligent learning environments in which students are actively engaged in the process of problem solving presents an opportunity for revolutionary changes in the way in which students’ competence can be assessed. Within such environments, students interact with a system that simulates real-world problems. Their mode of reasoning is more generative than evaluative. They plan and carry out strategies for solving problems, rather than working backwards from multiple-choice response alternatives. All aspects of their performance are available for measurement purposes, ranging from records of the problems they have solved to inferences about their actual problem-solving processes (based on their solution methods and their past performance). Some developers of intelligent tutoring systems have been so bold as to describe their assessments of the individual as “student models” (Clancey, 1983), or formal representations of the students’ declarative and procedural knowledge. These new possibilities for assessment in the course of instruction are being developed by individuals whose primary interest is in learning and instruction, rather than by psychometricians. As in criterion-referenced testing, the goal is the development of effective, individualized instructional systems.
It is important at this early stage of the enterprise to examine the role of assessment within instruction, and to identify (a) those aspects of a student’s problem-solving expertise that are capable of measurement (representation) within the framework of a tutoring system, and (b) the set of those measurable aspects that are worthwhile to measure, as viewed from the perspective of current theories of optimal instruction. Since computer-based learning environments can support a variety of learning strategies and types of explanations, it is also important to assess (c) the potential for measurement of a student’s preferred learning strategies and his or her rate of learning when a particular strategy is employed.
In this chapter we begin with a characterization of a theory of optimal instruction and the constraints it imposes on the design of intelligent learning environments. These are illustrated by describing one such environment, which incorporates many of the features of an optimal instructional environment. We then examine the potential for measurement within such an environment and the instructional purposes to be served by developing such representations of a student’s knowledge. We also attempt to map measurement concepts represented within the tutoring system onto those of traditional test theory as a way of characterizing the important distinctions we are trying to make. Finally, we consider another form of measurement made possible by tutoring systems, namely, the student’s rate of knowledge acquisition within a tutoring system configured to represent a particular learning strategy. Such systems, we argue, may allow an assessment of the particular learning strategies that are most effective or that are preferred by a student. Use of tutoring systems as testers may in this way change fundamentally the kind of knowledge of a student that is the goal of measurement.
A PLAN FOR AN INTELLIGENT TUTORING SYSTEM
We begin by characterizing some features of an effective instructional system as we (White & Frederiksen, 1986a, 1986b, 1987) and others (e.g., Anderson, Boyle, Farrell, & Reiser, 1984; Collins, Brown, & Newman, 1989) view them. The domain of instruction we have in mind is learning to reason about the behavior of a complex physical system and to solve problems involving that system. In the tutoring system we present, the physical domain is that of electrical circuits, and the problem solving is that involved in predicting the behavior of a circuit, designing and modifying a circuit, and troubleshooting. However, the instructional principles are quite general, applicable to domains as diverse as reading, writing, and mathematics, as well as to physics (see Collins et al., 1989, for examples).
1. Instruction Should be Problem Centered. Learning should occur within a problem-solving context. If the student is to acquire knowledge that is not inert but is useful in solving problems, he or she must practice the cognitive processing involved in applying that knowledge in solving problems. This principle is based on one of the oldest maxims of learning theory: You cannot learn a behavior if you do not exercise it (Thorndike, 1898, 1932). In this case, the “behavior” refers to the application of knowledge in the course of solving problems. Learning should therefore be situated in a problem-solving context, in order to engage the desired cognitive processes during learning and to motivate the acquisition of problem-solving strategies and models.
2. Learning Involves Successive Approximations to the Target Mental Representation. This principle is also based on a very old idea in learning theory: that complex behaviors can be acquired through the learning of a series of successive approximations to the desired final behavior. In the present context, the “behavior” is again a target mental representation together with strategies and techniques for applying such knowledge in problem solving. The successive approximations constitute an evolutionary progression of mental models, each of which builds on prior models, adding or modifying earlier representations until a target cognitive structure has been achieved. The process of learning is therefore one of model transformation, whereby the characteristics of problems and coordinated explanations of a tutor facilitate the modification of a prior model by the student and the synthesis of new concepts with prior mental representations.
3. Explanations Should be Process Centered. Explanations should be centered on modelling the reasoning involved in actually solving problems. They should relate the cognitive models that are being taught to (a) prior models for reasoning about the domain developed by the student in the course of learning, and (b) the procedures and strategies needed to use the cognitive models in solving problems within the domain. They thus should facilitate the model transformation to be developed by the student, as well as the application of the model in solving problems.
4. Instruction Should Employ Cognitively Focused Feedback. Students should receive feedback concerning not only the correctness of their problem solving, but also concerning the appropriateness of different steps in their solution, from the perspective of the cognitive models and strategies being taught. Such feedback, however, should not necessarily be immediate and intrude into their ongoing problem solving. In “real-time” tasks, for example, Munro, Fehling, and Towne (1987) have found that immediate feedback can be detrimental to learning. Problem solving may be similar. An alternative is to provide a basis for the student to compare his or her problem solving with that of an “expert” (as a coach might do in a post-game analysis). The comparison of problem-solving methods could be left up to the student, or an explicit analysis could be attempted by the tutor. On the other hand, if a student gets stuck and is seen to be “thrashing around” or if the student desires it, immediate feedback may be in order. The student could be coached in such instances by being given some hint as to the strategy that may be appropriate or the principle that is applicable, or the student could be shown some part of the problem solution and then be given an opportunity to complete the problem.
5. Problem Sequencing Should be Performance Based. In a problem-centered learning environment, problem sequences should be based on the mental models and problem-solving strategies required for their solution, and on knowledge of the student’s understanding of those models and strategies. The introduction of new problems should support the goals of (a) learning new model transformations, or (b) providing an opportunity to practice applying earlier acquired models in new problem situations. Each of these goals can be best met if the new problems do not require large scale model transformations, but rather incremental changes to an evolving mental model for the domain.
6. Motivation is Primarily Derived From Success in Solving Problems. In a problem-centered learning environment, students are actively engaged in carrying out the intellectual tasks posed by problems, and their motivation for engaging in learning will be intrinsic to their succeeding in the problem-solving enterprise. To capture this motivation, students should have knowledge of the progression of models and problem-solving strategies they are to master and the variety of problem types they will solve. They can then directly interpret their rate of progress within the domain. Expert modeling of problem solving and/or coaching should be employed to ensure success for any student who actively pursues learning within the problem-solving environment.
7. Multiple Learning Strategies Should be Supported. It should not be assumed that all students learn best using the same pedagogical technique (Cronbach & Snow, 1977). Some students may prefer to have solution strategies for a new class of problems modeled for them before they attempt such problems, whereas others may prefer to induce for themselves the new ideas required for solving a new class of problems. Learning environments should therefore provide for the preferences of individual learners.
8. Reification. Learning of cognitive skills such as those involved in problem solving may be enhanced if the cognitive contents of learning are made explicit and represented linguistically and graphically by the tutor in modeling problem solving, in giving explanations, and in providing hints or coaching (Anderson, Boyle, & Reiser, 1985; Brown, 1985; Collins & Brown, 1989). This allows students to develop an understanding of the nature of the cognitive skills that they are learning, and will encourage them to reflect on their own problem-solving processes.
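The performance-based sequencing of Principle 5 can be made concrete with a small sketch. The code below is illustrative only, not QUEST's actual algorithm: problems are tagged with the model concepts their solutions require (the problem and concept names are hypothetical), and the next problem chosen is one whose unmastered requirements form the smallest nonzero increment over what the student has already demonstrated.

```python
# Illustrative sketch of performance-based problem sequencing (Principle 5).
# Each problem is tagged with the model concepts its solution requires; the
# next problem is the one whose unmet requirements are the smallest nonzero
# increment over the concepts the student has already mastered.

def next_problem(problems, mastered, max_new_concepts=1):
    """Pick the problem with the fewest (but at least one) unmastered concepts."""
    candidates = []
    for name, required in problems.items():
        new = required - mastered
        if 0 < len(new) <= max_new_concepts:
            candidates.append((len(new), name))
    return min(candidates)[1] if candidates else None

# Hypothetical problem set and student state for a basic electricity curriculum.
problems = {
    "series-bulb":    {"voltage", "conductivity"},
    "parallel-bulbs": {"voltage", "conductivity", "parallel-paths"},
    "short-circuit":  {"voltage", "conductivity", "parallel-paths", "resistance"},
}
mastered = {"voltage", "conductivity"}
print(next_problem(problems, mastered))  # -> parallel-bulbs
```

Note the design choice: a problem requiring no new concepts is skipped (it yields no model transformation), and one requiring too many is deferred, matching the principle's preference for incremental over large-scale transformations.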
In the next section we describe a tutoring system that incorporates many of these instructional features. With this system as an example, we examine the potential for measurement of cognitive models and problem-solving strategies within such a system, and highlight the instructional uses of such measurement during tutoring.
Example of a Problem-Based Tutoring System
We have developed a tutoring system called QUEST (Qualitative Understanding of Electrical System Troubleshooting; White & Frederiksen, 1986a, 1986b, 1987) that teaches qualitative models for basic electricity and strategies for solving electrical troubleshooting problems. In this system, we attempted to incorporate many of the features of the optimal problem-based tutoring system just described. QUEST provides an environment in which students can learn basic concepts of electricity along with strategies for solving circuit problems, such as predicting circuit behavior and troubleshooting. The tutoring system combines features of a microworld that provides an interactive simulation of circuit behavior, and a coaching expert that can model how to reason about circuit behavior and can demonstrate troubleshooting strategies. Within this environment, students can construct and modify circuits, and they can attempt to solve problems. The tutoring system is capable of simulating and explaining the behavior of circuits and also of demonstrating how to solve circuit problems, either as an explanation prior to the student’s attempting the problem or as feedback following the student’s attempted solution of the problem.
Cognitive Modeling. To provide such an explanatory capability, the simulation system built into the tutor incorporates a qualitative, causal model for reasoning about the behavior of electrical circuits, rather than a quantitative model. The qualitative model forms the basis for representing the behavior of circuits within the tutoring system and at the same time provides explanations of how the students should reason about their behavior. This model is a cognitive model derived from studies we have carried out of the problem solving of an expert teacher and troubleshooter (White & Frederiksen, 1986a). As such, it employs the same kind of reasoning that the student is to develop for reasoning about a circuit. Thus, the actual behavior of the simulation, when illustrated using computer graphics and articulated through a speech interface, provides an explanation to the student of how to reason about a circuit. For example, if the student constructs a circuit from a set of elementary objects such as resistors, light bulbs, switches, and batteries, the simulation system can model the behavior of that circuit and explain its functioning in qualitative terms.
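To illustrate what qualitative, causal simulation means, here is a minimal sketch (not QUEST's model) of reasoning about a single-loop series circuit with qualitative states rather than numerical values; each conclusion carries the causal explanation a tutor could articulate. The component representation and function name are assumptions for this example.

```python
# Minimal sketch of qualitative, causal reasoning about a series circuit:
# states are symbolic ("open", "dead", "lit"), not numeric, and every
# conclusion is paired with the causal explanation behind it.

def simulate_series(components):
    """Return (bulb_state, explanation) for a single-loop series circuit."""
    for c in components:
        if c["kind"] == "switch" and c["state"] == "open":
            return "off", f"{c['name']} is open, so there is no conductive path and no current flows."
        if c["kind"] == "battery" and c["state"] == "dead":
            return "off", f"{c['name']} supplies no voltage, so no current is produced."
    return "lit", "There is a closed conductive path with a voltage source, so current flows and the bulb lights."

circuit = [
    {"kind": "battery", "name": "B1", "state": "good"},
    {"kind": "switch",  "name": "S1", "state": "open"},
    {"kind": "bulb",    "name": "L1", "state": "ok"},
]
state, why = simulate_series(circuit)
print(state, "-", why)  # the bulb is off, with the causal reason attached
```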
Another important feature of the tutoring system is that the cognitive simulation model (as well as the troubleshooting strategy) is not a static, single model for the behavior of circuits, but actually incorporates a set of upwardly compatible models that vary in their complexity. Initially, the models are very simple and only know about simple aspects of electrical theory or simple troubleshooting techniques. These early models are adequate for correctly simulating the behavior of only a limited number of circuits and giving explanations of circuit behavior that are consistent with the model at that level. We motivate transitions to more complex models by choosing problems that (a) cannot be solved correctly by the prior model, and (b) require the new concept or method of reasoning that is incorporated in the more elaborated model. The student’s task is to transform his or her current model into a more elaborate model that incorporates the new features needed to solve the new set of circuit problems. Together, the set of models through which the learner may pass forms a space of models, and the progression of models mastered by the student forms a trajectory through that space similar to Goldstein’s (1982) genetic graph.
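The "space of models" and the student's trajectory through it can be sketched as a directed graph, in the spirit of Goldstein's genetic graph: nodes are models of increasing complexity, edges are the incremental transformations between them, and a learning history is a path. The model names below are illustrative, not taken from QUEST.

```python
# Sketch of a model space as a directed graph. Nodes are qualitative models
# of increasing complexity; an edge M -> M' means M' is an upwardly
# compatible elaboration of M. A student's learning history is a trajectory
# (path) through this graph. Node names here are hypothetical.

model_space = {
    "M0-conductivity": ["M1-voltage"],
    "M1-voltage":      ["M2-parallel", "M2-resistance"],
    "M2-parallel":     ["M3-feedback"],
    "M2-resistance":   ["M3-feedback"],
    "M3-feedback":     [],
}

def is_valid_trajectory(path, space):
    """A trajectory is valid if each consecutive step follows an edge."""
    return all(b in space[a] for a, b in zip(path, path[1:]))

print(is_valid_trajectory(
    ["M0-conductivity", "M1-voltage", "M2-parallel", "M3-feedback"],
    model_space))  # -> True
```

Because the graph can branch (here at "M1-voltage"), different students may follow different trajectories to the same target model, which is what makes the trajectory itself informative for assessment.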
Within this framework, the tutoring task can be viewed as one of facilitating the student’s model transformations through the choice of problems and the generation of appropriate explanations. The models in the progression are designed to facilitate this evolutionary process of model construction. One can think of the student’s task of learning a model of circuit operation as analogous (initially) to developing a piece of computer code, or (later on) to modifying prior code in order to incorporate features of the next model in the progression. Therefore, we have sought to design the progression of models so that at each stage the models are easily modifiable. Computer science has given us some principles to use in specifying such models. One is inheritance: For example, with...
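The chapter's own inheritance example is cut off above, but the idea of upwardly compatible models built by inheritance can be sketched as follows (an illustrative reconstruction, not the authors' example): each more elaborate model subclasses its predecessor, overriding only the reasoning it refines, so earlier behavior is inherited unchanged and each transformation stays small.

```python
# Illustrative sketch of upwardly compatible models via inheritance: the
# elaborated model subclasses the simpler one and adds a single new
# criterion, leaving the inherited reasoning intact.

class ConductivityModel:
    """Level 0: reasons only about whether a closed conductive path exists."""
    def current_flows(self, closed_path, source_ok=True):
        return closed_path  # this model is blind to the voltage source

class VoltageModel(ConductivityModel):
    """Level 1: adds the requirement of a working voltage source."""
    def current_flows(self, closed_path, source_ok=True):
        # Inherit the path criterion, then add the new voltage criterion.
        return super().current_flows(closed_path) and source_ok

m0, m1 = ConductivityModel(), VoltageModel()
print(m0.current_flows(True, source_ok=False))  # -> True (the simple model over-predicts)
print(m1.current_flows(True, source_ok=False))  # -> False (the elaborated model corrects it)
```

The usage at the bottom shows exactly the kind of problem the text describes for motivating a model transformation: a circuit the simpler model gets wrong but the elaborated model gets right.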
Table of contents
- Cover
- Halftitle
- Title
- Copyright
- Content
- Introduction
- 1 Intelligent Tutors as Intelligent Testers
- 2 Analysis of Student Performance with the LISP Tutor
- 3 The Role of Cognitive Simulation Models in the Development of Advanced Training and Testing Systems
- 4 Reformulating Testing to Measure Learning and Thinking: Comments on Chapters 1, 2, and 3
- 5 Evidence from Internal Medicine Teaching Rounds of the Multiple Roles of Diagnosis in the Transmission and Testing of Medical Expertise
- 6 Diagnosing Individual Differences in Strategy Choice Procedures
- 7 Guided Learning and Transfer: Implications for Approaches to Assessment
- 8 The Assisted Learning of Strategic Skills: Comments on Chapters 5, 6, and 7
- 9 Parsimonious Covering Theory in Cognitive Diagnosis and Adaptive Instruction
- 10 Rules and Principles in Cognitive Diagnosis
- 11 Trace Analysis and Spatial Reasoning: An Example of Intensive Cognitive Diagnosis and Its Implications for Testing
- 12 Assessment Procedures for Predicting and Optimizing Skill Acquisition After Extensive Practice
- 13 Applying Cognitive Task Analysis and Research Methods to Assessment
- 14 Monitoring Cognitive Processing in Semantically Complex Domains
- 15 Diagnostic Approaches to Learning: Measuring What, How, and How Much: Comments on Chapters 12, 13, and 14
- 16 Diagnostic Testing by Measuring Learning Processes: Psychometric Considerations for Dynamic Testing
- 17 Generating Good Items for Diagnostic Tests
- 18 Toward an Integration of Item-Response Theory and Cognitive Error Diagnosis
- 19 Diagnostic Testing: Comments on Chapters 16, 17, and 18
- Author Index
- Subject Index