Deep Comprehension

Multi-Disciplinary Approaches to Understanding, Enhancing, and Measuring Comprehension

Keith K. Millis, Debra Long, Joseph Magliano, Katja Wiemer

eBook - ePub
  1. 282 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android
About This Book

This volume provides an overview of research from the learning sciences into understanding, enhancing, and measuring "deep comprehension" from a psychological, educational, and psychometric perspective. It describes the characteristics of deep comprehension, what techniques may be used to improve it, and how deep levels of comprehension may be distinguished from shallow ones. It includes research on personal-level variables; how intelligent tutors promote comprehension; and the latest developments in psychometrics. The volume will be of interest to senior undergraduate and graduate students of cognitive psychology, learning, cognition and instruction, and educational technology.


Information

Publisher
Routledge
Year
2018
ISBN
9781351613262
Edition
1

Part I

Understanding Deep Comprehension

1
Prose Comprehension Beyond the Page

Jennifer Wiley and Tricia A. Guerrero

A Brief Reflection on the Emergence of Research on Discourse Comprehension

Over 35 years ago, Graesser (1981) published Prose Comprehension Beyond the Word, which explored the processes involved in constructing meaning across units of text that extend beyond single words and sentences. A brief glimpse at some of Graesser’s most impactful publications since that first monograph illustrates how the field has moved toward the study of comprehension beyond the page: exploring how readers construct inferences in larger prose and discourse contexts (Graesser, Singer, & Trabasso, 1994), studying how learners interact with tutors and intelligent tutors (Franklin & Graesser, 1996; Graesser & Person, 1994), and promoting the development of tools (including Coh-Metrix) that can support automatic detection of readers’ comprehension and emotional states (Graesser, McNamara, Louwerse, & Cai, 2004). As a young graduate student starting in the lab of James F. Voss, the first author read the 1981 book as she began her first research projects on the effects of prior knowledge and attitudes on text processing. Rereading it recently along with the second author, it is clear that this work had a formative influence.
One salient theme that emerges from the opening chapter is the need for a new generation of research that considers how the meaning of prose (rather than word- or sentence-level meaning) is constructed. The work presented in this early volume heralds the burgeoning of research exploring comprehension at a discourse level, alongside contemporary work from the expertise tradition on the effects of domain knowledge on prose comprehension (Chiesi, Spilich, & Voss, 1979; Spilich, Vesonder, Chiesi, & Voss, 1979) and from the schema tradition using larger units of text (Anderson, Reynolds, Schallert, & Goetz, 1977). In the same year (1981), Garrod and Sanford published a book with a very similar name, Understanding Written Language: Explorations of Comprehension Beyond the Sentence, which emphasized the need to take the context of language into account.
A second salient theme that immediately appeals to an inquisitive researcher’s curiosity is the stark discrepancy painted throughout the monograph between the comprehension of narrative and expository prose. Readers prefer narratives, remember them better, and generate more inferences from them than from expository texts. These findings are at once intriguing and highly problematic for researchers who are interested in educational applications. In many educational contexts, students are routinely expected to gain an understanding of new concepts from expository texts, particularly in science. What is so different and difficult about comprehension of expository text? This is a central question that remains to this day. A later volume on The Psychology of Science Text Comprehension edited by Graesser and colleagues (Otero, Leon, & Graesser, 2002) provided some updates on this line of questioning, but there is still much more yet to discover about how and when learners can learn effectively from reading, and why they often fail to develop an understanding of new ideas from studying expository text.
There are surely many reasons why expository texts may cause comprehension difficulties. Texts vary in their structure and their complexity. And, readers vary in their familiarity with the subject matter, and whether they possess the prior knowledge needed to understand it or the working memory capacity that may be required to process the information. Narratives are generally about people and their actions and motivations over time. As readers, we generally have an abundance of prior knowledge about such things. In contrast, when we are given expository texts to read, it is often to teach us about phenomena, processes, or systems that we do not understand yet. This fundamental difference may be critical, or it may be just one of many reasons why comprehension from expository text is so challenging.
Answering the question of why expository text is so hard to comprehend, and under which contexts or conditions learning from text may be most successful, has been the motivation behind many studies done by Wiley and her colleagues. The general approach has been to identify when students experience obstacles to learning from expository texts; exploring which individual differences or conditions may relate to better or poorer performance; and testing hypothetical mechanisms by manipulating the amount of support that students receive through instructional prompts, learning activities, or instructional adjuncts such as animations and analogies. The ultimate goal of this work is to understand what features are critical for the design of effective instructional contexts so that students may acquire a deep understanding of subject matter as they read.

How Do We Distinguish “Deep” From “Shallow” Comprehension in Our Research, Both Theoretically and Empirically?

A key feature of the research that has been done by Wiley and her colleagues that resonates with Graesser’s work is the critical distinction between learning that is based in memory versus understanding of text, or alternatively between “shallow” and “deep” comprehension processes. Otero et al. (2002, p. 6) provide this helpful characterization:
Shallow knowledge consists of explicitly mentioned ideas in a text that refers to: lists of concepts, a handful of simple facts or properties of each concept, simple definitions of key terms, and major steps in a procedure… . Deep knowledge consists of coherent explanations of the material that fortify the learner for generating inferences, solving problems, making decisions, integrating ideas, synthesizing new ideas, decomposing ideas into subparts, forecasting future occurrences in a system, and applying knowledge to practical situations.
This distinction between shallow and deep comprehension is key when studying learning from expository science texts. The goal for reading is not for students to merely have a superficial representation of the exact words or sentences that were read. Rather, the goal is for them to develop a coherent situation model (Kintsch, 1994), causal model (Graesser & Bertus, 1998; Millis & Graesser, 1994; Singer, Harkness, & Stewart, 1997; Wiley & Myers, 2003), or mental model (Mayer, 1989) of a system or phenomenon. This generally requires generating connections not explicitly mentioned in the text, by integrating information across different parts of text or with prior knowledge. It can also involve reasoning through ideas stated in the text to arrive at implications or consequences.

Different Conceptions of Comprehension

The theoretical differences between these two types of learning outcomes (memory and understanding, or shallow vs. deep comprehension) lead to differences in measures of comprehension. Memory for a text requires shallow, surface-level processing. Typical memory-based questions will prompt the reader to recall ideas mentioned directly in the text such as facts or definitions, or to interpret the meaning of particular words or phrases. On the other hand, understanding of text requires a deeper level of processing. Questions that test for understanding often require the reader to think about “how” and “why,” and when given after reading, such questions can assess “deep learning” or “deep comprehension” of a system, process, phenomenon, or concept.
There are many ways of assessing comprehension of text, and test questions vary in the “depth” of the comprehension that is required. A consideration of two standardized comprehension tests (the Nelson-Denny, ND, and the Gates-MacGinitie, GM, reading tests) provides examples along a continuum of possible question types. The college-level forms of the ND (Forms G and H) contain both narrative and expository passages that are about 200 words in length. The scoring manual categorizes questions into two categories: literal and interpretive. The literal items are “text-based” questions that largely assess surface-level features of the text. A large proportion of these questions have answers that can be found verbatim in the text by using lexical overlap or simple search. Interpretive questions usually require a reader to assess the author’s point of view or the appropriate audience for the text. Because of the explicit relations between the questions and the texts, along with the minimal inferencing needed for items of either type, the ND can be categorized as requiring “shallow” comprehension abilities to arrive at correct responses.
The highest-level version (10/12) of the GM also contains both narrative and expository passages. As shown in Table 1.1, the GM expository passages tend to be shorter and more variable in length than the ND, yet comparable in difficulty. Although not defined by the GM, questions can also be categorized similarly to the ND. Literal questions require shallow processing, although they have more syntactic and lexical variability. They also require slightly more interpretation than the ND because the distractor choices require that the reader develop a basic understanding of the passage in order to select the correct answer, as opposed to conducting a verbatim search for words associated with the question. The non-literal questions are variable in the level of processing required. Some are lower-level inference questions that require minor connective or bridging inferences across sentences to arrive at the answer. There are also higher-level inference questions that require the reader to reason with information from the text, integrate it with prior knowledge, or apply it in new contexts.
How do we measure “deep” comprehension? As shown in Table 1.1, whereas the standardized tests provide strong coverage of “shallow” comprehension processes, they provide less coverage of “deep” comprehension processes. Few questions require the reader to use the information that they read as part of a reasoning process or as applied to a new situation. However, some standardized tests do have more items that require readers to think about inferences or implications that follow from the text, including the ACT Reading Comprehension subtest and the MCAT Critical Reasoning section. For example, the ACT Reading Comprehension subtest includes questions that ask readers to use reasoning skills to understand sequences of events; make comparisons; comprehend cause-effect relationships; and draw generalizations. The passages are representative of the kinds of text commonly encountered in first-year college curricula. And, there is now even a section that requires reasoning across multiple texts.
Table 1.1 Descriptive Statistics for Features of Different Comprehension Tests

                             Nelson-Denny    Gates-MacGinitie    Griffin, Wiley & Thiede (in press)
FKGL (Texts)                 11.7 (3.2)      12.1 (2.0)          11.16 (1.24)
FRES (Texts)                 44.6 (16.0)     48.7 (9.4)          48.02 (7.54)
Word count (Texts)           203.4 (22.8)   123.9 (43.0)        799 (198)
Text-based test items        63.16%          22.92%              50%
Low-level inference items    36.84%          66.67%              25%
High-level inference items   0%              10.42%              25%

Note: FKGL = Flesch-Kincaid Grade Level, FRES = Flesch Reading Ease Score
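The FKGL and FRES values in Table 1.1 can be derived with the standard Flesch formulas, which combine mean sentence length and mean syllables per word. The sketch below is a minimal illustration, not the implementation used by the test publishers; in particular, the syllable counter is a rough vowel-group heuristic.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, subtracting a silent final 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_scores(text: str) -> tuple[float, float]:
    """Return (FRES, FKGL) for a passage using the standard Flesch formulas."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # mean words per sentence
    spw = syllables / len(words)        # mean syllables per word
    fres = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fres, fkgl
```

Scores in the range reported in Table 1.1 (FKGL around 11-12, FRES in the 40s) correspond roughly to late-high-school reading difficulty.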
The materials from Griffin, Wiley, and Thiede (in press) (and Thiede, Wiley, & Griffin, 2011) serve as examples of attempts to assess comprehension at both surface and deep levels. These materials include a set of six expository science texts that describe phenomena, and support construction of a causal understanding of the process or system (see Wiley, Griffin, & Thiede, 2005 and 2008 for more discussion). In contrast to memory-based test items, the inference questions used by Wiley and her colleagues primarily test for understanding of a causal or mental model of phenomena. These questions ask readers to apply knowledge they gained from the text to new situations, analyze causal relationships and relationships between ideas and implications, and predict possible consequences when factors are changed. In order to support the construction of a mental model from the text, and to allow for multiple different test questions to be created, the texts are typically much longer than ND and GM passages, while being comparable in difficulty as shown in Table 1.1. A further contrast with traditional assessments regards whether texts are available during testing. While reading comprehension assessments traditionally allow readers to answer with the texts available, when the goal of research is to measure what a student has learned from a passage, then it becomes important to test what understanding remains after the text is no longer present.
A final point is that the nature of questions that can be asked on a comprehension test is inextricably linked to the quality of the text that the reader is asked to read. If the text does not include the information needed to construct a causal model of a phenomenon, then it makes no sense to include comprehension items based on causal inferences. The expository texts that are provided to learners need to be carefully written. Test items also need to be carefully created so that they require that students have constructed a coherent situation-model representation of the text in order to answer questions correctly. To ensure test items are measuring understanding, they cannot be answerable simply by using a surface-level representation of the text. To ensure that readers need to engage in active comprehension, the texts also cannot be fully explicit. That is, some key connections need to be inferred by the reader. If the texts are fully explicit with respect to a causal model, then students could potentially answer inference questions by using verbatim memory for the text.
To explore differences between memory for text and understanding (or shallow and deep comprehension), two distinct sets of test items (memory and inference) were developed for these six texts. The distinction between these two types of test items can be empirically demonstrated in a number of ways. For example, in one study we found that undergraduates are able to reliably differentiate between memory and inference questions by recognizing when answers could be “found directly in the text” (memory questions) as opposed to answers that could not, and need to be inferred from the text (inference questions). A different study asked participants to answer both types of test questions with the texts present. Performance on the memory questions in this condition was at ceiling, whereas performance on the inference questions did not improve. However, a third study found that performance on the inference questions did improve when readers were prompted to generate an explanation of how or why a phenomenon is occurring before taking the tests (Guerrero & Wiley, 2018).
Further results discussed later provide additional support for the distinction between the two types of test questions, as several manipulations have produced dissociable effects. One line of work explores conditions that underlie successful understanding of (as opposed to memory for) explanatory, expository science texts. Two other lines of work extend in two different directions: across multiple documents (when readers attempt to acquire new understanding from multiple sources), and into the reader’s mind (when readers need to assess their levels of comprehension). All three areas of work show the critical importance of the reader adopting an appropriate task or activity model that supports readin...
