
Models of Short-Term Memory, edited by Susan E. Gathercole
About this book
This volume offers a collection of theoretical perspectives on short-term memory. It contains overviews of models of short-term memory, with particular emphasis on detailed description of how the models function. The volume represents both computational approaches and theories expressed in more traditional verbal form, and covers both developmental and neuropsychological perspectives on short-term memory. This book should appeal to active researchers in the area of memory, to graduate students, and to academics who wish to update their knowledge of this fast-developing area of research and theory. Final-year undergraduates may also find this book of interest.
1
The Concept of Working Memory
Medical Research Council Applied Psychology Unit, Cambridge, UK*
Introduction
In the 1880s Joseph Jacobs, a London schoolmaster with an interest in the new science of psychology, wanted a measure of the individual differences among the mental capacities of his pupils. He devised a test in which the subject was presented with a string of numbers and attempted to repeat them back verbatim. If correct, the string was increased to a point at which errors began to occur; the longest string repeated correctly is the subject's digit span. The digit span paradigm has continued to be important, both practically, as it forms a subtest of the WAIS, probably the most widely used measure of adult intelligence (Wechsler, 1955), and theoretically, where the technique continues to play an important role in contemporary theorising about short-term or working memory (Baddeley, 1986). The term working memory refers to the system or systems involved in the temporary storage of information in the performance of such cognitive skills as reasoning, learning, and comprehension. It has evolved from the earlier concept of short-term memory, and can probably best be understood in the context of its development over the last 30-40 years. Before going on to examine this, however, it is important to point out that the term "working memory" is also used in other ways. These will be described briefly before we consider the early development of the concepts of short-term and working memory.
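Jacobs's procedure amounts to a simple ascending staircase, and since this volume is concerned in part with computational models, it may help to state it as one. The sketch below is an illustration of the logic only, not a reconstruction of his actual protocol; the seven-item "subject" is a hypothetical stand-in.

```python
import random

def digit_span_trial(recall, start_len=3, max_len=12, seed=0):
    """Estimate digit span: present progressively longer digit strings
    until verbatim recall first fails; span is the longest string
    repeated back correctly."""
    rng = random.Random(seed)
    span = 0
    for length in range(start_len, max_len + 1):
        digits = [rng.randrange(10) for _ in range(length)]
        if recall(digits) == digits:  # verbatim repetition required
            span = length
        else:
            break
    return span

# A hypothetical subject who can hold at most seven digits:
perfect_up_to_7 = lambda digits: digits if len(digits) <= 7 else digits[:7]
print(digit_span_trial(perfect_up_to_7))  # 7
```

In practice spans are estimated over several trials per length to smooth out lapses; the single-pass version above captures only the core ascending logic.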
In research on learning in animals, the term working memory is most closely associated with the work of Olton, Walker, and Gage (1978), and in particular with work involving the radial maze. This piece of apparatus involves a series of runways, each of which radiates out from a central choice point. In a typical experiment, all the arms are baited with food and the animal is placed in the centre. The optimal strategy is to take the food from each limb of the maze and then not return to that limb again. In order to perform this task, the animal, typically a rat, must remember which arms have already been visited. Because the animal is likely to be tested repeatedly on the same maze, it is important to base a given day's judgements on performance on that particular day. The term working memory is used by Olton to refer to the system responsible for holding this information for that particular day, hence allowing the animal to perform effectively. Although this has proved to be a very valuable paradigm for studying animal learning and memory, it is not at all clear how the paradigm maps onto the working memory research that forms the bulk of the present chapter; indeed, attempts to develop an analogue for use with human subjects suggest that there may be a substantial long-term rather than working memory contribution to the way in which humans perform the task (R. Morris, personal communication).
A second use of the term working memory is concerned with the computer-based modelling of cognition, based on the production system approach initially developed by Newell and Simon (1972). For example, Anderson's (1983) model of cognition, ACT*, like all production system models, assumes a working memory system within which the productions operate. Such an assumption appears to be essential to this approach to modelling. However, such models do not necessarily make any attempt to identify and study an equivalent working memory system in humans. In the case of Anderson's model, for example, the capacity of working memory does not appear to be limited in any important way, whereas the concept of limited capacity lies at the heart of empirical research on short-term and working memory.
It is important to note, however, that other cognitive models based on a production system architecture do make assumptions about working memory and its limitations. A good example is the model of comprehension developed by Kintsch and van Dijk (1978), which assumes that comprehension is constrained because working memory can hold only a limited number of propositions, and that this number varies from one individual to the next. Kintsch and Vipond (1979) present an intriguing demonstration of this aspect of their model in a study contrasting the speeches of Eisenhower and Stevenson in their respective Presidential campaigns. Given a large working memory, the two sets of speeches are found to be equally "comprehensible" by the model, whereas when the working memory capacity of the model was reduced, it began to have difficulty in coping with Stevenson's speeches. Even here, however, the model assumes a limited-capacity working memory as part of a model of language comprehension, rather than being directly concerned with investigating the nature of the working memory system that is assumed.
Long- and Short-Term Memory
Despite the continued psychometric use of digit span, short-term memory was comparatively neglected until the late 1950s, when interest in the temporary storage of information developed as a result of the practical need to study such real-world tasks as air traffic control or telephony as part of the application of experimental psychology to military performance during World War II. This coincided with the development of the information processing approach to human cognition as reflected in Broadbent's seminal book Perception and communication (Broadbent, 1958), and subsequently by Neisser's equally influential text, Cognitive psychology (Neisser, 1967). Broadbent suggested that it was necessary to assume two kinds of memory: a short-term system in which items were held in a temporary buffer from which the memory trace would fade spontaneously unless revived by rehearsal; and long-term memory, where forgetting was assumed to occur as a result of mutual interference between long-term memory traces. This distinction appeared to be supported by the demonstration by Brown (1958) in Britain and Peterson and Peterson (1959) in the US that small amounts of material, well within the memory span, would be forgotten within seconds unless actively rehearsed.
During the 1960s, controversy raged as to whether it was or was not necessary to assume more than one kind of memory. Melton (1963) showed that traditional short-term memory tasks such as digit span were capable of reflecting long-term learning. If, for example, a given sequence were surreptitiously repeated from time to time, then the probability of its being correctly repeated would increase (Hebb, 1961). He concluded on the basis of this and similar evidence that it was unnecessary to assume two memory systems. Short-term memory (STM) was simply a weaker version of long-term memory (LTM). In both cases, he proposed that forgetting was the result of interference, citing cogent evidence from Keppel and Underwood (1962) in support of the view that the rapid forgetting observed by Brown and the Petersons was the result of proactive interference (disruption of memory by earlier items) rather than of trace decay.
By the mid-1960s, however, the evidence seemed to be favouring a dichotomous view. Waugh and Norman (1965) pointed out the need to distinguish between a hypothetical short-term memory store, which they labelled primary memory, and the experimental STM paradigm that was assumed to reflect it. As they point out, there is no reason to assume that any given experimental paradigm is a pure measure of anything; hence the demonstration that digit span may have components of both long- and short-term memory is unproblematic. They agree with Melton in suggesting that short-term forgetting is not due to trace decay but, rather than adopting the traditional associationist interference interpretation offered by Melton, they prefer a model that assumes that short-term forgetting occurs because of the limited capacity of STM; as new items enter, old ones are displaced.
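Waugh and Norman's displacement assumption can be illustrated with a minimal sketch: primary memory as a fixed-capacity buffer in which each incoming item evicts the oldest. The capacity of four items is arbitrary, chosen purely for illustration, and the sketch makes no claim about how real displacement (e.g. probabilistic vs. strictly first-in-first-out) operates.

```python
from collections import deque

# Primary memory as a fixed-capacity buffer: once full, each new
# item silently displaces the oldest one (illustrative capacity).
CAPACITY = 4

store = deque(maxlen=CAPACITY)
for item in ["cat", "dog", "pen", "cup", "map", "jar"]:
    store.append(item)  # "cat" and "dog" are displaced along the way

print(list(store))  # ['pen', 'cup', 'map', 'jar']
```

On this view, forgetting requires no decay over time at all: an unrehearsed item is lost only when enough new material arrives to push it out.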
During the next few years, a number of new sources of evidence in favour of a dichotomous view appeared, among the most cogent were:
Two-component Tasks. Certain experimental tasks appear to have two quite separable components, one brief and the other durable. One example is the task of free recall; subjects are presented with a list of words and attempt to recall them in any order, typically resulting in a relatively low level of recall for earlier and middle-order words, together with excellent recall of the last few items presented. If recall is delayed by a distracting task, however, the good recall of the last items, the so-called recency effect, is disrupted, whereas recall of earlier items remains at approximately the same level (Glanzer & Cunitz, 1966). In general, recall of the earlier items is dependent on a wide range of variables that are known to influence long-term learning, such as rate of presentation, word frequency, and imageability, whereas the recency effect is sensitive to delay but is unaffected by these variables. A simple interpretation is that earlier items are recalled from LTM, whereas the more recent items reside in a temporary short-term store (Glanzer, 1972).
Acoustic and Semantic Coding. Immediate memory span for verbal materials suggested that memory was based on the sound of the material rather than its meaning; errors tended to be phonologically similar to the item they were replacing (e.g. V substituting for B), even though presentation was visual (Conrad, 1964). Furthermore, sequences that are similar in sound, whether comprising letters (e.g. P D C V G) or words (man, cat, cap, mat, can), lead to poorer immediate serial recall than dissimilar sequences (K Q W L Y, or pit, day, cow, pen, tub) (Baddeley, 1966a; Conrad & Hull, 1964). In contrast, the long-term serial learning of word lists tends to rely on meaning, to be disrupted by semantic similarity, and to be unaffected by similarity of sound (Baddeley, 1966b). Sachs (1967) showed that a similar phenomenon occurs in remembering prose passages, when subjects are required to decide whether a sentence is an exact repetition of an earlier section of the passage. Changes to the surface structure of the passage which maintain meaning are readily detected only when tested virtually immediately, whereas semantic changes are detected after substantial delays. Finally, Kintsch and Buschke (1969), using a two-component probed memory task, demonstrated that the earlier LTM component of the task was sensitive to semantic coding, whereas the recency component was susceptible to similarity of sound.
Neuropsychological Evidence. Perhaps the most powerful evidence for a distinction between long- and short-term memory, however, came from brain-damaged patients. It had been known for many years that patients suffering from the classic amnesic syndrome typically have preserved digit span, despite their substantial impairment in long-term learning and memory capacity. This point was demonstrated particularly cogently by the classic amnesic patient, HM, who became amnesic following bilateral hippocampal and temporal lobe excision carried out to relieve an intractable epilepsy (Milner, 1966). Subsequent research showed the pattern of deficits predicted by a dichotomous view of memory systems. Two-component tasks such as free recall showed the recency component to be preserved, while the long-term component was grossly impaired (Baddeley & Warrington, 1970). Performance on the Peterson task is also preserved in amnesic patients, provided their deficit is relatively pure, with no impairment in the executive functions that are typically mediated by the frontal lobes (Baddeley & Warrington, 1970; Warrington, 1982). In short, patients suffering from a pure and classic amnesic syndrome appeared to have a deficit in LTM, but preserved STM.
At the same time, patients with the opposite pattern of deficits were reported by Shallice and Warrington (1970). One such patient, KF, had normal long-term learning ability, coupled with a digit span of only two items, and markedly impaired recency in free recall, together with very poor performance on the Peterson short-term forgetting task. Such patients were assumed to have a specific deficit in STM.
The Modal Model
By the late 1960s, evidence seemed to be accumulating for a division of memory into three subsystems: (1) sensory memory, a series of brief sensory buffers lasting for less than a second, and feeding into (2) short-term or primary memory, which in turn fed (3) long-term memory. There were many such models, all approximating more or less closely to that of Atkinson and Shiffrin (1968), which for that reason has been termed the āmodal modelā.
However, despite its apparent success in accounting for a wide range of data, by the early 1970s the modal model was itself running into problems. These centred on the learning assumption made by the model, namely that long-term learning involved transfer from the short-term store, and that the longer an item resided in the short-term store, the greater its probability of being learned. A series of experiments that required subjects to maintain items in short-term storage by rote rehearsal failed to find the predicted relationship between time in store and long-term learning (e.g. Craik & Watkins, 1973).
Such studies led Craik and Lockhart (1972) to re-interpret the data in terms of levels of processing. This view argues that the probability of an item being learned increases as it is processed at progressively deeper and more elaborate levels. Hence, given a printed word such as dog, the requirement to make a superficial judgement of the typeface will lead to rather poor long-term learning; a slightly "deeper" judgement as to whether it rhymes with the word log or not will lead to somewhat better recall, whereas even better learning results from a deeper judgement based, for instance, on its semantic characteristics (e.g. Would the word "log" make sense as a completion of the sentence "The lumberjack chopped the …"?) (Craik & Lockhart, 1972; Craik & Tulving, 1975).
The levels of processing framework was extremely influential during the 1970s, and absorbed much of the research effort that had previously been directed to understanding short-term memory. Indeed, in some cases it was argued that it obviated the need for a concept such as short-term memory, although Craik and Lockhart themselves continued to assume a short-term or primary memory system that plays an important part in the process of encoding and recoding. Although the theoretical power of the framework has been questioned (e.g. Baddeley, 1978), it remains a useful broad framework that ties together a good deal of evidence on the relationship between coding and long-term memory.
A second source of problems for the modal model came from a closer examination of the data from patients with STM deficits. If, as the model proposes, good short-term storage is necessary for long-term learning, then STM patients should also have LTM deficits. This did not appear to be the case, whether performance was measured in terms of the long-term component of free recall, standard clinical memory tasks, or everyday cognitive performance, on which STM patients often appear to be remarkably normal. Such patients clearly presented a challenge to the dominant view that STM acts as a working memory that is necessary for the performance of a wide range of other cognitive tasks. For that reason, a colleague, Graham Hitch, and I decided to study the working memory hypothesis in more detail, not through the examination of STM patients, as such patients are rare and were not available to us, but rather by attempting to simulate STM deficits in normal subjects.
Working Memory
The section that follows will be concerned with the general concept of working memory, and attempts that have been made to investigate its usefulness. The account will start with the work of Baddeley and Hitch (1974), which led to the assumption of a specific model of working memory. In this section however, we will not be concerned with the details of this or any alternative model, but rather with the general usefulness of the concept of working memory. Hence, some of the work described stems from investigators who assume a single relativ...
Table of contents
- Cover
- Half Title
- Title Page
- Copyright
- Contents
- List of contributors
- Preface
- 1. The concept of working memory
- 2. Covert processes and their development in short-term memory
- 3. A connectionist model of STM for serial order
- 4. Interactive processes in phonological memory
- 5. The representation of words and nonwords in short-term memory: Serial order and syllable structure
- 6. Nonword repetition, STM, and word age-of-acquisition: A computational model
- 7. Associations and dissociations between language impairment and list recall: Implications for models of STM
- 8. Auditory short-term memory and the perception of speech
- 9. The object-oriented episodic record model
- 10. Item, associative, and serial-order information in TODAM
- 11. How many words can working memory hold? A model and a method
- Author index
- Subject index