COMPUTATIONAL PSYCHOLOGY AND INTERPRETATION THEORY
Hilary Putnam
I once got into an argument after dinner with my friend Zenon Pylyshyn. The argument concerned the following assertion which Pylyshyn made: "cognitive psychology is impossible if there is not a well-defined notion of sameness of content for mental representations". It occurred to me later that the reasons I have for rejecting this assertion tie in closely with Donald Davidson's well-known interests in both meaning theory and the philosophy of mind. Accordingly, with Zenon's permission and (I hope) forgiveness, I have decided to make my arguments against his assertion the subject of this paper.
Mental Representation
Let us consider what goes on in the mind when we think "there is a tree over there", or any other common thought about ordinary physical things. On one model, the computer model of the mind, the mind has a "program", or set of rules, analogous to the rules governing a computing machine, and thought involves the manipulation of words and other signs (not all of this manipulation "conscious", in the sense of being able to be verbalized by the computer). This model, however, is almost vacuous as it stands (in spite of the heat it generates among those who do not like to think that a mere device, such as a computing machine, could possibly serve as a model for something as special as the human mind). It is vacuous because the program, or system of rules for mental functioning, has not been specified; and it is this program that constitutes the psychological theory. Merely saying that the correct psychological theory, whatever it may be, can be represented as a program (or something analogous to a program) for a computer (or something analogous to a computer) is almost empty; for virtually any system that can be described by a set of laws can at least be simulated by a computer. Anything from Freudian depth psychology to Skinnerian behaviorism can be represented as a kind of computer program.
Today, however, computer scientists working in "artificial intelligence", and cognitive psychologists thinking about reference, semantic representation, language use, and so on, have a somewhat more specific hypothesis in mind than the almost empty hypothesis that the mind can be modelled by a digital computer. (Even that hypothesis is not wholly empty, because it does imply something about the causal structure of mental processes: it implies that they take place according to deterministic or probabilistic rules of sequencing according to a finite program.) The further hypothesis to which workers on computing machines and cognitive psychologists have been converging is this: that the mind thinks with the aid of representations. There seem to be two different ideas, actually, which are both involved in talk of "representations" today.
The first idea, based on experience with trying to program computers to simulate intelligent behaviour, is that thinking involves not just the manipulation of arbitrary objects or symbols, but requires the manipulation of symbols that have a very specific structure, the structure of a formalized language. The experience of computer people was that the most interesting and successful programs in "artificial intelligence" typically turned out to involve giving the computing machine something like a formalized language and a set of rules for manipulating that formalized language ("reasoning" in the language, so to speak).
The second idea associated with the term "representation" is that the human mind thinks (in part) by constructing some kind of a "model" of its environment: a "model of the world". This "model" need not, of course, literally resemble the world. It is enough that there should be some kind of systematic relation between items in the representational system and items "out there", so that what is going on "out there" can be read off from its representational system by the mind.
Once a reference definition has been given for a formalized language, a set of sentences in that language can serve as a "representational system" or "model of the world".
Suppose, for example, we wish to represent the fact that the city of Paris is bigger than the city of Vienna. If we have a predicate, say F, which represents the relation bigger than (i.e. if the open sentence which we write in the formal notation as "Fxy" is correlated to the relation which holds between any two things if and only if they are both cities and the first is larger – in, say, population – than the second), and if we have "individual constants" or proper names, say, a and b, which represent the cities of Paris and Vienna (i.e. "a" is correlated to Paris and "b" is correlated to Vienna by the reference definition for the language), then we can represent the fact that Paris is a bigger city than Vienna by just including in our list of accepted sentences (our "theory of the world") the sentence "Fab". In a similar way, any state of affairs, however complex, that can be expressed using the predicates, proper names, and logical devices of the formal language can be asserted to obtain by including in the "theory of the world" the formula that represents that state of affairs.
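To fix ideas, here is a minimal sketch in Python (purely illustrative; the predicate F and the constants a and b come from the example above, while the particular data structures are my own) of a reference definition and a small "theory of the world":

    # Reference definition: individual constants are correlated with objects,
    # and each predicate is correlated with a relation (a set of tuples).
    reference = {"a": "Paris", "b": "Vienna"}
    extensions = {"F": {("Paris", "Vienna")}}  # pairs (x, y): x is a bigger city than y

    # The "theory of the world": the set of accepted sentences.
    theory = {("F", "a", "b")}  # the formula "Fab"

    def holds(formula):
        """Check whether an atomic formula is true under the reference definition."""
        predicate, *terms = formula
        return tuple(reference[t] for t in terms) in extensions[predicate]

    assert all(holds(f) for f in theory)  # "Fab" represents a fact that obtains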
When our "representational system" is itself a theory, and when our method of employing our representational system involves making formal deductions, we see that one and the same object – the formalized language, including the rules for deduction – can be the formalized language that computer scientists have been led to postulate as the brain or mind's (the difference does not appear particularly significant, from this perspective) medium of computation, and, simultaneously, the medium of representation. The mind uses a formalized language (or something significantly like a formalized language) both as medium of computation and medium of representation. This may be called the working hypothesis of cognitive psychology today.
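On this picture a toy deduction procedure can operate on the very same formulas that do the representing. A hedged sketch (again illustrative Python; the transitivity rule for F and the third constant c are assumptions of mine, not part of the example above):

    def deduce(theory):
        """Forward chaining with a single rule: F is transitive
        (if Fxy and Fyz are accepted, accept Fxz)."""
        derived = set(theory)
        changed = True
        while changed:
            changed = False
            for (_, x, y) in list(derived):
                for (_, w, z) in list(derived):
                    if y == w and ("F", x, z) not in derived:
                        derived.add(("F", x, z))
                        changed = True
        return derived

    theory = {("F", "a", "b"), ("F", "b", "c")}  # suppose "c" names a third city
    assert ("F", "a", "c") in deduce(theory)

The objects the deduction manipulates and the objects that represent how things stand are one and the same set of formulas; that is the content of the dual-role hypothesis.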
Part of this working hypothesis seems to me certainly correct. I believe that we cannot account at all for the functioning of thought and language without regarding at least some mental items as representations. When I think (correctly) "there is a tree in front of me", the occurrence of the word "tree" in the sentence I speak in my mind is a meaningful occurrence and one of the items in the extension of that occurrence of the word "tree" is the very tree in front of me. Moreover, the open sentence "x is in front of me" is correlated (in the correct semantics for my language) with the relational property of being in front of me, and the entire sentence "there is a tree in front of me" is, by virtue of these and similar facts, one which is true if and only if there is a tree in front of me.
Where there is room for psychologists to differ is over how many mental items are representations, how useful it is to postulate a large and complex unconscious system of representations in order to explain conscious thought and intelligent action, etc.
The Verificationist Semantics of "Mentalese"
So far what I have said is in line with the thinking of Pylyshyn and other "propositionalist" cognitive psychologists. For the sake of the argument, we shall assume all this is right. Of course, the actual story may be much more complicated. The mind may employ more than one formalized language (or, rather, formalized-language-analog). Different parts of the brain may compute in different "media". And both sentence-analogs and image-analogs may be used in the actual computational procedures, along with things that are neither. But let us assume the best case for Pylyshyn's view: a mind which does all its computing in one formalized language.
In what does the mind's understanding of its own medium of computation consist? It will do no good to say, as Fodor (1975) has, that we should not apply the word "understand" to "mentalese" itself. ("Mentalese" is a name for the hypothetical formalized-language-analog in the brain.) For "mentalese" and "formalized-language-in-the-brain" are metaphors. They may be scientifically useful and rich metaphors; but as metaphors they are inseparable from the notion of understanding. Something cannot literally be a language unless it can be understood; and something cannot be a language-analog unless there is a suitable understanding-analog. If some representations in the brain are sentence-analogs and predicate-analogs, then what is the corresponding understanding-analog?
The answer, I suggest, is this: the brain's "understanding" of its own "medium of computation and representation" consists in its possession of a verificationist semantics for the medium, i.e. of a computable predicate1 which can represent acceptability, or warranted assertibility, or credibility. Idealizing, we treat the language as interpreted (in part) via a set of rules which assign degrees of confirmation (i.e. subjective probabilities) to the sentence-analogs relative to experiential inputs and relative also to other sentence-analogs. Such rules must be computable; and their "possession" by the mind/brain/machine consists in its being "wired" to follow them, or having come to follow them as a result of learning. (I do not assume that mentalese must be innate, or that it must be disjoint from the natural language the speaker has acquired.)
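What such a computable predicate might look like can be sketched crudely (illustrative Python; the particular update rule is an assumption of mine, standing in for whatever confirmation rules the mind/brain actually follows):

    from dataclasses import dataclass, field

    @dataclass
    class Evidence:
        """Experiential inputs bearing on sentence-analogs."""
        confirms: set = field(default_factory=set)
        disconfirms: set = field(default_factory=set)

    def degree_of_confirmation(sentence, credences, evidence):
        """Assign a subjective probability to a sentence-analog, relative to
        experiential inputs and to credences in other sentence-analogs.
        All that the hypothesis requires is that the rule be computable."""
        prior = credences.get(sentence, 0.5)
        if sentence in evidence.confirms:
            return prior + (1.0 - prior) * 0.5  # move halfway toward acceptance
        if sentence in evidence.disconfirms:
            return prior * 0.5                  # move halfway toward rejection
        return prior                            # no relevant input: unchanged

    # e.g. an unaided prior of 0.5 rises to 0.75 on confirming input:
    degree_of_confirmation("there is a tree over there", {},
                           Evidence(confirms={"there is a tree over there"}))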
But why a verificationist semantics? Why not a meaning theory in Davidson's sense?
Obviously, if we interpret mentalese as a "system of representation" we do ascribe extensions to predicate-analogs and truth conditions to sentence-analogs. But the "meaning theory" which represents a particular interpretation of mentalese is not psychology. In fact, if we formulate it as Davidson might, its only primitive notion is "true", and "true" is not a psychological notion. To spell this out: the meaning theory yields such theorems ("T-sentences") as (pretend that mentalese is English): "'Snow is white' is true in mentalese if and only if snow is white". This contains no psychological vocabulary at all.
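The point can be made vivid with a toy Tarski-style truth definition (illustrative Python; the fragment and its clauses are mine). Every clause mentions only worldly conditions; none mentions belief, verification, or any other psychological notion:

    def true_in_mentalese(sentence, world):
        """Truth definition for a toy fragment of mentalese: atomic sentences
        are strings, compound sentences are tuples. No clause refers to a
        thinker's psychology."""
        if isinstance(sentence, str):
            return world[sentence]  # T-sentence: "Snow is white" is true iff snow is white
        if sentence[0] == "not":
            return not true_in_mentalese(sentence[1], world)
        if sentence[0] == "and":
            return all(true_in_mentalese(s, world) for s in sentence[1:])
        raise ValueError("sentence not in the fragment")

    assert true_in_mentalese("Snow is white", {"Snow is white": True})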
We might try to say "well, the understanding consists in the brain's knowing the T-sentences of the meaning theory". But the notion of knowing cannot be a primitive notion in sub-personal cognitive psychology.2
Suppose we try to say: the mind understands without using representations what it is for snow to be white, and it knows the representation "snow is white" is true if and only if that state of affairs holds. Not only does this treat the mind as something that "knows" things, instead of analyzing knowing into more elementary and less intentional processes, but it violates the fundamental assumption of cognitive psychology, that understanding what states of affairs are, thinking about them, etc., cannot be done without representations. At bottom, we would be stuck with the myth of comparing representations directly with unconceptualized reality.
On the other hand, if we say, "the brain's/mind's use of the sentence 'Snow is white' (or the corresponding sentence-analog) is such as to warrant the interpretation that 'Snow is white' is true in mentalese if and only if snow is white, and this is what it means to say that the brain (implicitly) 'knows' the T-sentence", then we do not give any theory of what that "use" consists in. This is what a verificationist semantics gives (and, as far as I can see, what only a verificationist semantics gives). I suggest, then, that verificationist semantics is the natural semantics for functionalist (or "cognitive") psychology. Such a semantics has a notion of "belief" (or "degree of belief") which is what makes it cognitive; at the same time it is a computable semantics, which is what makes it functionalist.
Of course, we want the semantics to connect with action, and this means that the model must incorporate a utility function as well as a degree of confirmation function. This function, too, must be computable (or, strictly speaking, semi-computable). This idealization is, of course, severe: we are assuming that the belief-analog (represented by the degree-of-confirmation function) and the preference-analog (represented by the utility function) are both fully consistent. The actual (neurologically realized) analogs of both belief and preference (or belief-representations and preference-representations) may well be inconsistent, as long as there are procedures for resolving the inconsistencies when practical decisions have to be made. In a terminology used by Reichenbach in another context, consistency may be "de faciendo and not de facto". What significance this has for philosophy of mind, I shall discuss briefly at the end of this paper.
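How the two functions might connect to action can be sketched as follows (illustrative Python; the renormalization at decision time is my own way of picturing consistency achieved de faciendo rather than de facto):

    def choose_action(actions, outcomes, confirmation, utility):
        """Pick the action with greatest expected utility. Both the
        degree-of-confirmation function and the utility function must be
        computable for the model to remain functionalist."""
        def expected_utility(action):
            weights = [confirmation(outcome, action) for outcome in outcomes]
            # The stored degrees of confirmation may be inconsistent (they
            # need not sum to 1); resolve the inconsistency only when a
            # practical decision has to be made.
            total = sum(weights) or 1.0
            return sum((w / total) * utility(o) for w, o in zip(weights, outcomes))
        return max(actions, key=expected_utility)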
For now, the problem is this: if the brain's semantics for its medium of representation is verificationist and not truth-conditional, then what happens to the notion of the "content" of a mental representation?
Two Ruritanian Children
Imagine that there is a country somewhere on earth called Ruritania. In this country let us imagine that there are small differences between the dialects which are spoken in the north and in the south. One of these differences is that the word "grug" means silver in the northern dialect and aluminium in the southern dialect. Imagine two children, Oscar and Elmer, who grow up in Ruritania. They are as alike in genetic constitution and environment as you please, except that Oscar grows up in the south of Ruritania and Elmer grows up in the north of Ruritania. Imagine that in the north of Ruritania, for some reason, pots and pans are normally made of silver, whereas in the south of Ruritania pots and pans are normally made of aluminium. So northern children grow up knowing that pots and pans are normally made of "grug", and southern children grow up knowing that pots and pans are normally made of "grug".
We may suppose that Oscar and Elmer have the same "mental representation" of "grug", that they have the same beliefs in connection with grug, etc. Of course some of these beliefs will differ in meaning even if they are identical in verbal and mental representation. For example, when Oscar believes "my mother has grug pots and pans" and when Elmer believes "my mother has grug pots and pans" the indexical word "my" refers to different persons, and hence the term "my mother" refers to different mothers. But unless such small differences in collateral information are already enough to constitute a difference in the content of the mental representation (in which case it would seem that the ordinary distinction between the meaning of a sign and collateral information that we have in connection with the sign has been wholly abandoned),3 then it would seem that we should say that the content of the mental representation of "grug" is exactly the same for Oscar and for Elmer at this stage in their lives.
I do not mean to suggest that the word "grug" has the same meaning in Oscar's idiolect as it does in Elmer's idiolect at this stage; I've argued elsewhere (Putnam, 1975a) that the difference in reference in the two communities should be regarded as infecting the speech of the individual speakers. To spell this out: when Oscar tries to determine what is grug he will ultimately have to rely on "experts". These experts need not necessarily be scientists; he may simply ask his parents (who may in turn consult store owners or even scientists). But the point is that since the extension of "grug" is in fact different in the two communities, and since, on the theory of meaning that I have defended in other places, difference in extension constitutes difference of meaning, and since extension is fixed collectively and not individually, it ends up that the meaning of the word "grug" in the idiolects of Oscar and Elmer is not the same even though there is nothing "psychological", nothing "in their heads", which constitutes the difference in meaning. Meanings aren't in the head. There is a difference in the meaning of the word "grug" in this case; but it is in the reference of the word, as objectively fixed by the practices of the community, and not in the conceptions of grug entertained by Oscar and Elmer.
But the concept of content that Pylyshyn is interested in and that Chomsky4 has expressed an interest in is one that would factor out such objective differences in extension. What Pylyshyn is looking for is a notion of the content of a mental representation in which "water" on earth and "water" on Twin Earth would be said to have the same content for speakers...