Part I
The Framework
1. The Explanation Game
Why does the explanation process deserve special focus? The answer, in short, is that explanation is at the heart of intelligence. Explanation is the process by which we make sense of the world around us. Before we can even begin to form intelligent plans or make intelligent predictions, we must understand the causes of the things we observe. This chapter, which is adapted from Schank's (1986) book on explanation patterns, describes the central role that the explanation process plays in human thinking.
When this chapter first appeared it was, among other things, Schank's charge to fellow researchers to take up the study of explanation. One way to read the chapters of this book that were contributed by Schank's students is as a description of what happened when they took up his charge.
1.1 The Turing Test
The question of whether machines can think is perhaps a bit tired at this point in the history of computing, but the question of what would be required in order to make machines think is just beginning to be explored. Of course, each question relates to the other, so, in order to begin the process of building a thinking machine, we must consider not only if such a project is possible, but also what elements of the human thinking process are inextricably bound up with our sense of what we mean when we talk about a thinking machine. To build a machine with artificial intelligence (AI), we must come to understand, in a profound way, what it means to have intelligence. For years, researchers in AI could content themselves with adding yet another cute feature to their programs, allowing machines to do new things, one by one. But, it is becoming increasingly clear that to really make intelligent machines, as opposed to machines that exhibit one or two aspects of intelligence, one must attack the basic issues of the nature of human thought and intelligence head-on.
There have, of course, been many discussions of machine thought, by far the most famous of these by the British mathematician, Alan Turing (1963), when computers were in their infancy. Turing proposed a test, which he called the Imitation Game, that has become a common, but often misunderstood, way of judging the ability of a machine to understand. Turing's major argument was that the question of whether a machine can think is meaningless. He suggested the following alternative: If a person failed to distinguish between a man imitating a woman (via teletype) and a computer imitating a man imitating a woman, then the machine succeeded in the Imitation Game. Turing argued that success in this game was, in the long run, inevitable.
What Turing actually said, and what has been made of it, has not always been the same thing. He did not say that machines would be thinking if they played the Imitation Game successfully. He merely said that successful imitation was the real issue for computer scientists. In the subsequent history of AI, two general lines of argument on this issue have developed. One is that machines will never duplicate human thought processes; the other is that no matter how well machines imitate human behavior, they cannot be said to truly understand in the same way as a human being. To put this another way, the claims made are first, that the Turing test will never be passed; and second, that even if it is passed, it does not prove that machines understand. Of course, Turing himself disagreed with the first claim; he regarded the second one as meaningless.
It is now more than thirty years since Turing's paper. With the exception of Kenneth Colby (1973), a psychiatrist who found that other psychiatrists could not tell his computer version of a paranoid from the real thing, no researcher has claimed to have created a program that would pass the Turing test. And, no matter how many times people have affirmed that Turing was correct in his assessment of the validity of the question, it has failed to go away. Further, there is little reason to believe that it will go away. No achievement in building intelligent software can dispel it because there are always those who believe that there is something more to the nature of intelligence than any amount of software displays.
Critics of artificial intelligence seem always to be able to "move the line" that must be crossed. "It would be intelligent if it did X" is nearly always followed by "No, that was wrong; it would be intelligent if it did Y," when X has been achieved. But the problem of the ultimate possibility of an intelligence that is different from our own and possibly even superior to our own, embodied in something not made of flesh and blood, will not be determined by critics with arbitrary standards. It is time to look again at the twin questions of the criteria for the ultimate thinking ability of computer programs and the nature of what is meant by understanding.
1.2 On Men and Women
One of the interesting, if not serendipitous, facets of Turing's Imitation Game is that, in its initial conception, the problem is to find the difference between a man and a woman via teletype. The seeming implicit assumption is that there are differences between men and women that are discernible by teletype. On the other hand, given that the problem is merely to get the computer to do as well at the Imitation Game as the man did, and assuming that there is no difference between men and women that would be recognizable via teletype, the task of the machine is to duplicate a human in its answers. Thus, Turing's test doesn't actually depend on men and women being discernibly different. But, the nature of what it means to understand may best be illustrated by that distinction.
To see what we mean, let us consider the question of whether men can really understand women (or alternatively, whether women can really understand men). It is common enough in everyday experience for men and women to both claim that they really do not understand their opposite number. What can they mean by this? And, most important, how is what they mean by it related to the problem of determining whether computers can understand?
When the claim is made that men and women are really quite different (mentally, not physically), what is presumably meant is that they have different beliefs, different methods of processing information, different styles of reasoning, different value systems, and so on. (It is not our point here to comment on the validity of these assertions. We are simply attempting to use the principle of these assertions in our argument. These same assertions might be made about different ethnic groups, cultures, nations, and so on; we are simply using Turing's domain.)
The claim that we assume is not being made by such assertions is that men and women have different physical instantiations of their mental processes. (Of course, it is possible that men and women do have brains that differ physically in important respects, but that would be irrelevant for this argument.) So, what is it that makes men and women feel they have difficulty understanding each other? Empathy. Understanding involves empathy. It is easier to understand someone who has had similar experiences (and who, because of those experiences, has developed similar values, beliefs, memory structures, rules-of-thumb, goals, and ideologies) than to understand someone with very different types of experiences.
Understanding consists of processing incoming experiences in terms of the cognitive apparatus one has available. This cognitive apparatus has a physical instantiation (the brain, or the hardware of the computer) and a mental instantiation (the mind, or the software of the computer). When an episode is being processed, a person brings to bear the totality of his or her cognitive apparatus to attempt to understand it. What this means in practice is that people understand things in terms of their particular memories and experiences. People who have different goals, beliefs, expectations and general life-styles will understand identical episodes quite differently.
Therefore, no two people understand in exactly the same way or with exactly the same result. The more different people are from one another, the more their perception of their experiences will differ. On the other hand, when people share certain dimensions of experience, they will tend to perceive similar experiences in similar ways. Thus, men tend to understand certain classes of experiences in ways that are different from women's.
It is unlikely that an experience that in no way bears upon one's sex will be understood differently by men and women. Recall that the assumption here is that the baseline cognitive apparatus is the same regardless of sex. Any experience that does relate to the sex of the observer in some way will be processed differently. This can involve obvious issues, such as the observation of an argument between a man and a woman. There, we would expect a man to observe the episode from the point of view of the man and a woman to observe it from the point of view of the woman. In addition, such identification with different characters in a situation can extend to observations of situations in which the feuding characters are of the same sex, but one displays more traditionally male attributes and the other displays more traditionally female behavior. Identification, and hence perception, can thus be altered by one's understanding of the goals, beliefs, or attitudes underlying, or perceived to underlie, the behavior of the characters in an episode that one is observing.
Thus, for example, one's perception of the validity and purpose behind a war can be altered by whether one is the mother of a son who is about to be drafted or whether one previously fought in a war and found it an ennobling experience. In general, one's sense of what is important in life affects every aspect of one's understanding of events.
The claim then is that men and women, as examples of one division of human beings, do not, and really cannot, understand each other. The same argument can be put forward, with more or less success, depending on the issue under consideration, with respect to Arabs and Israelis, or intellectuals and blue-collar workers. In each of these cases, differing values can cause differing perceptions of the world.
1.3 On Computers and People
Now let us return to the question of whether computers can understand. What exactly does it mean to claim that an entity, either a person or a machine, has understood? On the surface, it appears that there are two different kinds of understanding to which people commonly refer. We talk about understanding another human being, or animal, and we talk about understanding what someone has told us, or what we have seen or read. This suggests that there are actually two different issues to confront when we talk about computers understanding people. One is to determine whether computers will ever really understand people, in the deep sense of being able to identify with them, or empathize with them. The other is whether computers will be able to comprehend a news story, interact in a conversation, or process a visual scene. This latter sense of understanding comprises the arena in which AI researchers choose to do battle. Often the critics of AI (e.g., Dreyfus, 1972; Weizenbaum, 1976) choose to do battle over the former.
In fact, these two seemingly disparate types of understanding are really not so different. They are both aspects of the same continuum. Examining both of them allows us to see what the ultimate difficulty in AI is likely to be and what problems AI researchers will have to solve in order to create machines that understand.
Weizenbaum (1976) claimed that a computer will never be able to understand a shy young man's desperate longing for love, expressed in his dinner invitation to the woman who is the potential object of his affection, because a computer lacks experience in human affairs of the heart. In some sense, this seems right. We cannot imagine that machines will ever achieve that sort of empathy, because we realize how difficult it is for people who have not been in a similar situation to achieve that level of understanding. In other words, Weizenbaum's statement is, in some sense, equivalent to saying that no person can understand another person without having experienced feelings and events similar to those one is attempting to understand. Where does this leave the poor computer? Where does this leave a child? Where does this leave a happily married man who met his wife when they were both small children and married her when they were graduated from high school, before he ever had to cope with asking her out to dinner?
Weizenbaum is correct as far as he goes, but he misses a key point. Understanding is not an all-or-none affair. People achieve degrees of understanding in different situations, depending upon their level of familiarity with those situations. Is it reasonable to expect a level of empathy from a machine that is greater than the level of empathy we expect from human beings?
The important question for researchers in AI, psychology, or philosophy, is not whether machines will ever equal humans in their understanding capabilities. The important scientific questions are about people, not computers. What processes does a person go through when he or she is attempting to understand? With regard to the Turing test, we therefore need to investigate how our recognition of that understanding involves empathy, the ability to relate, and the capacity to draw upon common experiences. Looking at the question this way will help us to better understand the nature of mental processes.
1.4 The Nature of Understanding
The easiest way to understand the nature of understanding is to think of it in terms of a spectrum. At the far end of the spectrum we have what we call Complete Empathy. This is the kind of understanding that might obtain between twins, very close brothers, very old friends, and other such combinations of very similar people.
At the opposite end of the spectrum we have the most minimal form of understanding, which we call Making Sense. This is the point where events that occur in the world can be interpreted by the understander in terms of a coherent (although probably incomplete) picture of how those events came to pass.
Now let us step back for a moment. Before we complete this spectrum, it would be worthwhile to discuss both what the spectrum actually represents and what the relevance of current AI research is to this spectrum.
There is a point on this spectrum that describes how an understander copes with events outside his or her control. The end points of this spectrum can be loosely described as, on the one hand, the understander thinking, "Yes, I see what is going on here; it makes some sense to me," and, on the other hand, thinking, "Of course, that's exactly what I would have done; I know precisely how you feel."
In our research (e.g., Schank, 1982; Schank & Riesbeck, 1981) we have been concerned with the nature of understanding because we are trying to get computers to read and process stories. In the course of that research, we have considered various human situations that we wished to model. We built various knowledge structures that attempt to characterize the knowledge people have of various situations. The restaurant script, for example (Schank & Abelson, 1977), was used in an attempt to understand restaurant stories, and it was that script that prompted Weizenbaum's criticism about understanding love in a restaurant. Since that earlier research, we have come to realize that these knowledge structures function best if t...
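The idea behind the restaurant script can be illustrated with a minimal sketch. This is not Schank and Abelson's (1977) implementation; the scene names and the gap-filling function below are hypothetical, chosen only to show how a script, as an ordered sequence of expected scenes, lets an understander infer the events a story leaves unstated.

```python
# Illustrative sketch only: a script modeled as an ordered list of
# expected scenes. The scene names are invented for this example.
RESTAURANT_SCRIPT = ["enter", "be_seated", "order", "eat", "pay", "leave"]

def understand(story_events, script=RESTAURANT_SCRIPT):
    """Match the events a story mentions against the script, and return
    the unmentioned scenes in between -- the script 'fills the gaps'."""
    inferred = []
    position = 0
    for event in story_events:
        if event not in script:
            continue  # an event outside the script; ignored in this sketch
        index = script.index(event)
        inferred.extend(script[position:index])  # scenes the story skipped
        position = index + 1
    return inferred

# A story that mentions only ordering and leaving lets the program infer
# that the diner also entered, was seated, ate, and paid.
print(understand(["order", "leave"]))
```

The point of the sketch is that understanding a routine story is less a matter of deduction than of recognizing which stereotyped situation applies and letting its structure supply the missing pieces.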