
eBook - ePub
Scripts, Plans, Goals, and Understanding
An Inquiry Into Human Knowledge Structures
- 266 pages
- English
About this book
First Published in 1977. In the summer of 1971, there was a workshop in an ill-defined field at the intersection of psychology, artificial intelligence, and linguistics. The fifteen participants were in various ways interested in the representation of large systems of knowledge (or beliefs) based upon an understanding process operating upon information expressed in natural language. This book reflects a convergence of interests at the intersection of psychology and artificial intelligence. What is the nature of knowledge and how is this knowledge used? These questions lie at the core of both psychology and artificial intelligence.
1 Introduction
1.1 What this book is about
This book reflects a convergence of interests at the intersection of psychology and artificial intelligence. What is the nature of knowledge and how is this knowledge used? These questions lie at the core of both psychology and artificial intelligence. The psychologist who studies "knowledge systems" wants to know how concepts are structured in the human mind, how such concepts develop, and how they are used in understanding and behavior. The artificial intelligence researcher wants to know how to program a computer so that it can understand and interact with the outside world. The two orientations intersect when the psychologist and the computer scientist agree that the best way to approach the problem of building an intelligent machine is to emulate the human conceptual mechanisms that deal with language. There is no way to develop adequate computer "understanding" without providing the computer with extensive knowledge of the particular world with which it must deal. Mechanistic approaches based on tight logical systems are inadequate when extended to real-world tasks. The real world is messy and often illogical. Therefore artificial intelligence (henceforth AI) has had to leave such approaches behind and become much more psychological (cf. Schank and Colby, 1973; Bobrow and Collins, 1975; Boden, 1976). At the same time, researchers in psychology have found it helpful to view people as "information processors" actively trying to extract sense from the continual flow of information in the complicated world around them. Thus psychologists have become more interested in machine models of real-world knowledge systems. The name "cognitive science" has been used to refer to this convergence of interests in psychology and artificial intelligence (Collins, 1976).
This working partnership in "cognitive science" does not mean that psychologists and computer scientists are developing a single comprehensive theory in which people are no different from machines. Psychology and artificial intelligence have many points of difference in methods and goals. Intellectual history, like political history, is full of shifting alliances between different interest groups. We mention this because for many commentators, the blood quickens when computers and human beings are associated in any way. Strong claims for similarity (e.g., Newell and Simon, 1972) are countered by extravagant alarms (e.g., Weizenbaum, 1976). Enthusiasts and horrified skeptics rush to debate such questions as whether a computer could ever be in love. We are not interested in trying to get computers to have feelings (whatever that might turn out to mean philosophically), nor are we interested in pretending that feelings don't exist. We simply want to work on an important area of overlapping interest, namely a theory of knowledge systems. As it turns out, this overlap is substantial. For both people and machines, each in their own way, there is a serious problem in common of making sense out of what they hear, see, or are told about the world. The conceptual apparatus necessary to perform even a partial feat of understanding is formidable and fascinating. Our analysis of this apparatus is what this book is about.
1.2 Knowledge: Form and Content
A staggering amount of knowledge about the world is available to human beings individually and collectively. Before we set out on a theory of knowledge systems, we ought to ask ourselves: knowledge about what? We must be wary of the possibility that knowledge in one domain may be organized according to principles different from knowledge in another. Perhaps there is no single set of rules and relations for constructing all potential knowledge bases at will. A desire for generality and elegance might inspire a theorist to seek a "universal" knowledge system. But if you try to imagine the simultaneous storage of knowledge about how to solve partial differential equations, how to smuggle marijuana from Mexico, how to outmaneuver your opponent in a squash game, how to prepare a legal brief, how to write song lyrics, and how to get fed when you are hungry, you will begin to glimpse the nature of the problems.
Procedures for intelligently applying past knowledge to new experience often seem to require common sense and practical rules of thumb in addition to, or instead of, formal analysis (Abelson, 1975). The prospects for the general theorist to cope with all the varied applications of common sense are especially dismal. Nevertheless, many artificial intelligence researchers take a generalist point of view. It is in the best tradition of mathematics (in which computer scientists are generally well trained) that great power is gained by separating form and content: the same system of equations may account for a great many apparently disparate phenomena. It is also a central tenet in computer science that generality is highly desirable. Turing's (1936) original principle of the general purpose machine has often been embraced as though the computer were (or soon would be) in practice a general purpose machine. The field of artificial intelligence is full of intellectual optimists who love powerful abstractions and who strive to develop all-embracing formalisms.
It is possible to be somewhat more pragmatic about knowledge, however. The five-year-old child learning to tie shoelaces need not in the process be learning anything whatsoever about mathematical topology. There is a range of psychological views on the nature of knowledge, and we shall say a little more about this in the next section. For now, we simply note that we will take a pragmatic view. We believe that the form of knowledge representation should not be separated too far from its content. When the content changes drastically, the form should change, too. The reader will encounter plenty of abstractions in this book, but each set of them will be pegged specifically to a particular type of real-world content. Where generalizing is possible, we will attempt to take advantage of it, but we will not try to force generality where it seems unnatural.
In order to adopt this attitude, we have set some boundaries on the type of knowledge we will consider. Our focus will be upon the world of psychological and physical events occupying the mental life of ordinary individuals, which can be understood and expressed in ordinary language. Our knowledge systems will embody what has been called "naive psychology" (Heider, 1958), the common sense (though perhaps wrong) assumptions which people make about the motives and behavior of themselves and others, and also a kind of "naive physics", or primitive intuition about physical reality, as is captured in Conceptual Dependency (CD) theory (Schank, 1972, 1975). This book goes well beyond CD theory, however. That theory provides a meaning representation for events. Here we are concerned with the intentional and contextual connections between events, especially as they occur in human purposive action sequences. This new stratum of conceptual entities we call the Knowledge Structure (KS) level. It deals with human intentions, dispositions, and relationships. While computers presumably cannot actually experience such intentions and relationships, they can perfectly well be programmed to have some understanding of their occurrence and significance, thus functioning as smart observers. If our theory is apt, it will provide a model of the human observer of the human scene; it will also explain how to construct a computer observer of the human scene, and lead to the eventual building of a computer participant in the human world.
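To make the idea of a KS-level representation concrete, here is a minimal sketch of how a program might encode intentional and contextual connections between events. This is purely illustrative: the class names, link kinds, and methods are our own inventions for this sketch, not the book's notation or any actual implementation by the authors.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Event:
    """A simple event: an actor performing an action, possibly on an object."""
    actor: str
    action: str
    obj: Optional[str] = None

@dataclass
class Link:
    """A directed intentional/contextual connection between two events."""
    kind: str      # e.g., "motivates", "enables", "results-in"
    source: Event
    target: Event

@dataclass
class KnowledgeStructure:
    events: List[Event] = field(default_factory=list)
    links: List[Link] = field(default_factory=list)

    def connect(self, source: Event, kind: str, target: Event) -> None:
        self.links.append(Link(kind, source, target))

    def explain(self, event: Event) -> List[Link]:
        # The links leading into an event say why it plausibly occurred.
        return [l for l in self.links if l.target is event]

# A tiny purposive action sequence: hunger leads to entering a
# restaurant, which in turn makes ordering possible.
ks = KnowledgeStructure()
hungry = Event("John", "is-hungry")
enter = Event("John", "enter", "restaurant")
order = Event("John", "order", "hamburger")
ks.events += [hungry, enter, order]
ks.connect(hungry, "motivates", enter)  # hunger motivates entering
ks.connect(enter, "enables", order)     # being inside enables ordering

print([l.kind for l in ks.explain(order)])  # -> ['enables']
```

The point of the sketch is only that a "smart observer" reasons over the connections between events (why John ordered), not over the meaning of any single event in isolation, which is what CD theory already provides.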
Often our emphasis will be on the nature of potential understanding of two or three sentences, story fragments, or longer stories. These provide a straightforward and helpful way to pose the major issues. Lurking beneath the surface, however, is an interest in the ingredients of personal belief systems about the world, which dispose people toward alternative social, religious, or political actions. One of us has a major interest in belief systems and ideologies (Abelson, 1973). This book is not directly addressed to that interest, but the concepts developed are a major part of that total effort.
What we will not present in this book is a general apparatus for attempting to represent any and all knowledge. We give no information retrieval methods of interest to library scientists. The reader with a passion for mathematics and/or logic will be disappointed. Likewise, anyone wondering, for example, whether we could get a computer to play squash or roll pasta dough should not wait with bated breath. The geometry of bouncing balls, the "feel" of dough texture, and many other aspects of human activities involve knowledge falling outside of our present boundaries. This is because (among other reasons) visual and kinesthetic processes cannot readily be represented in verbal form. However, a great deal of the human scene can be represented verbally, and we have no lack of things to work on.
1.3 Traditional Points of View
We have mentioned that our task lies at the intersection of psychology (more specifically, cognitive psychology and cognitive social psychology) and artificial intelligence. Since we are concerned with verbally expressible knowledge, there is also an overlap with linguistics. When one tries to work in a disciplinary intersection, one inevitably comes into conflict with the traditional standards, habits, and orientations of the parent disciplines. This is especially true when the disciplines correspond to university departments, breeding suspicion of out-groups (cf. Campbell, 1969). Here we briefly sketch some of these conflicts, which we have resolved somewhat differently from others working at the same intersection.
Psychology is a heterogeneous discipline. The major subdivisions are developmental, clinical, cognitive and social psychology, and psychobiology. It is surprising to the non-psychologist but familiar to all but the youngest generation of psychologists that cognitive psychology is a relatively new branch of study. American experimental psychology was dominated for so long by behaviorism (roughly from 1935 to 1960) that the study of mental processes lay almost entirely dormant while other branches of psychology were developing rapidly. Since mental events could not be observed directly, there was scientific resistance toward relying on them to explain anything, whatever the scientist's common sense might tell him. Introspective evidence was not regarded as objectively trustworthy.
Since 1960, there has been an enormous surge of careful experimental work on mental phenomena. Skinner notwithstanding, human psychology could not seem to do without cognitive processes. Nevertheless, the methodological caution of the behaviorists was carried over into this resurgence. Acceptable scientific procedure called for quantitative response measurements such as accuracy of recall or choice reaction time when subjects were confronted with well-controlled stimulus tasks. In the verbal domain, stimulus control usually entailed repetitive trials on isolated verbal materials, deliberately avoiding meaningful connotations in the experimental situation. While recent experimental materials have not been as trivial as the old-fashioned nonsense syllables, neither have they been genuinely meaningful or even necessarily plausible. Experimental tasks are often unusual and/or unnatural in relation to tasks encountered daily by people in using language. For example, in a well-known experiment by Foss and Jenkins (1973), subjects listened to 48 sentences such as "The farmer placed the straw beside the wagon", with instructions to press a key the instant they first heard the phoneme "b". In another well-known series of experiments by Anderson and Bower (1973), subjects heard 32 unrelated sentences such as "In the park, the hippie kissed the debutante", "In the bank, the tailor tackled the lawyer", etc., and an hour later were asked to recall as many of them as they could. The artificiality of tasks such as the latter led Spiro (1975) to remark tartly:
"Why should a research subject integrate the to-be-remembered information with his or her other knowledge? The role the information will play in his or her life can be summarized as follows: take in the information, hold it for some period of time, give it back to the experimenter in as close to the original form as possible, and then forget it forever. The information cannot be perceived as anything but useless to the subject in his or her life (given the common employment of esoteric or clearly fictional topics as stimulus materials). The information, even when not clearly fictional, is probably not true. In any case, the subject knows that the relative truth of the information has nothing to do with the purpose of the experiment." (p. 11)
In complaining about the lack of meaningful context in experiments such as these, it is no doubt unfair to present them out of their context. The experimenters had serious purposes, and the data were of some interest. But since our needs are for a set of interrelated constructs to explain the process of natural understanding of connected discourse, this style of experimentation is both too unnatural and too slow. There has been a gradual increase in research with connected discourse as stimulus material (e.g., Bransford and Johnson, 1972; Kintsch, 1974; Frederiksen, 1975; Thorndyke, 1977),
but the field is still marked with a very cautious theoretical attitude. We are willing to theorize far in advance of the usual kind of experimental validation because we need a large theory whereas experimental validation comes by tiny bits and pieces. Our approach, in the artificial intelligence tradition, is discussed in Section 1.6.
If the research properties of experimental cognitive psychology are often unduly restrictive, the traditions in the field of linguistics are even more restrictive. Linguistics has concerned itself with the problem of how to map deep representations into surface representations (see Chomsky, 1965). After a long obsession with syntactically dominated deep representations, recent work in linguistics has oriented deep representations much more towards considerations of meaning (Lakoff, 1971; Clark, 1974). Despite this reo...
Table of contents
- Cover
- Half Title
- Full Title
- Copyright
- Preface
- Contents
- Dedication
- 1 Introduction
- 2 Causal Chains
- 3 Scripts
- 4 Plans
- 5 Goals
- 6 Themes
- 7 Representation of Stories
- 8 Computer Programs
- 9 A Case Study in the Development of Knowledge Structures
- Bibliography
- Author Index
- Subject Index
Scripts, Plans, Goals, and Understanding by Roger C. Schank and Robert P. Abelson is available in PDF and/or ePUB format, catalogued under Psychology & Cognitive Psychology & Cognition.