Chapter One
What are Beliefs?
Patricia S. Churchland and Paul M. Churchland
Introduction
Beliefs, according to the tradition in philosophy, are states of mind that have the property of being about things – things in the world, as well as abstract things, events in the past, and things only imagined. A central problem is to explain how physical states of the brain can be about things; that is, what it is for brain states to represent. This is a puzzle not only for beliefs, but also for mental states more generally, such as fears, desires, and goals. In analytic philosophy, the main focus has been on language as the model for beliefs and for the relations among various kinds of beliefs. Although the linguistic approach produced some useful logical distinctions, little progress was made in solving the central representational problem. A newer approach starts from the perspective of the brain and its capacity for adaptive behavior. The basic aim is to address aboutness in terms of complex causal and mapping relations between the brain and world, as well as among brain states themselves, which result in the brain's capacity to represent things.
Beliefs: The Philosophical Background
According to the conventional wisdom in philosophy, beliefs are states of the mind that can be true or false of the world, and whose content is specified by a proposition, such as the belief that the moon is a sphere or that ravens are black. The sentence in italics is the proposition that specifies what the belief is about, and conveniently also specifies what would make it true – the moon's being a sphere, for example. Because specificity concerning what is believed requires picking out a proposition, beliefs are called propositional attitudes. The "attitude" part concerns one of many "attitudes" a person might have in relation to the proposition: believing it, or doubting it, or hoping that it is true, and so on.
The class of propositional attitudes generally, therefore, includes any mental state normally identified via a proposition, complete with subject and predicate, perhaps negation and quantifiers. Included in this class are some thoughts (Smith thinks that the moon is a sphere, but not Smith thought about life), some desires (Smith wants that he visits Miami, but not Smith wants love), some intentions (Smith intends that he makes amends, but not Smith intends to play golf), some fears (Smith fears that the tornado will tear apart his house, but not Smith fears spiders), perhaps even some sensory perceptions, such as seeing that the tree fell on the roof, but not seeing a palm tree. These contrasts are worthy of notice because by and large philosophers have focused almost exclusively on the propositional attitudes, and have neglected closely related states that are not propositionally specified. This neglect turns out to be a symptom of a fixation with language-like (linguaform) structures as the essence of beliefs, and of many cognitive functions more generally.
A useful and uncontroversial background distinction contrasts beliefs that one is currently entertaining (e.g., the mailman is approaching) with beliefs that are part of background knowledge and are not part of the current processing (e.g., wasp stings hurt). The latter is stored information about the way the world is, and can be retrieved, perhaps implicitly, when the need arises. Some philosophers have been puzzled about whether we can also be said to believe something that is inferable from other propositions we do believe, but which has not been explicitly inferred.1 For example, until this moment, I have not considered the proposition that wasps do not weigh more than 100 pounds, but it does follow from other things I do believe about wasps. Do I count it among the beliefs I held yesterday? While the status of obviously inferable propositions (are they really in my belief set or not?) is perhaps a curiosity, it is not particularly pressing. A more pressing question concerns how beliefs are stored as background information, and what is represented as we learn a skill by repeated practice, such as golfing or hunting or farming.
Like other propositional attitudes, beliefs have the property that philosophers refer to as intentionality. Appearances notwithstanding, intentionality has nothing special to do with intending. Franz Brentano, in his 1874 book, Psychology from an Empirical Standpoint, adopted the expression and characterized three core features of intentionality: (1) the object of the propositional attitude need not exist (e.g., Smith believes that the Abominable Snowman is ten feet tall). In contrast, a person cannot kick a nonexistent thing such as the Abominable Snowman. (2) A person can believe a false proposition, such as that the moon has a diameter of about twenty feet. Finally, (3) a person can believe a proposition P yet not believe a proposition Q to which P is actually equivalent. Hence Smith may believe that Horace is his neighbor, yet not believe that The Night Stalker is his neighbor, even though Horace is one and the same man as The Night Stalker. This is because Smith might not know that Horace is The Night Stalker. By contrast, if Smith shot The Night Stalker he ipso facto shot Horace, whether he knew it or not. To take a different example, this time involving the proposition's predicate, Jones might believe the proposition Sam smokes marijuana without believing the proposition Sam smokes cannabis sativa. But if Sam smokes marijuana, he ipso facto smokes cannabis sativa.
Brentano's choice of the term "intentional," though it may seem perverse to contemporary ears, was inspired by the Latin word intendere, meaning to point at, to direct toward. Brentano chose a word that would reflect his preoccupation, namely, that representations are about things; they point beyond themselves. Needless to say, his word choice has been troublesome owing to its similarity to intentions to do something, which may or may not be intentional in the Brentano sense. More recent research has sometimes avoided the confusion by simply abandoning the word "intentional" in favor of "aboutness" or "directedness."
Brentano was convinced that these three features of intentionality were the mark of the mental, meaning that these features demarcated an unbridgeable gulf between purely physical states, such as kicking a ball, and mental states such as wanting to kick the Abominable Snowman. As Brentano summed it up (1874, p. 89):
This intentional inexistence is characteristic exclusively of mental phenomena. No physical phenomenon exhibits anything like it. We can, therefore, define mental phenomena by saying that they are those phenomena which contain an object intentionally within themselves.
For Brentano, the solution to the problem of how states can represent is that they are mental, not physical, and the mental is just like that. He forthrightly accepted the Cartesian hypothesis according to which the world of the mental is completely different from the world of the physical.
Unable to ignore developments in the biological sciences, philosophers in the second half of the twentieth century found themselves in a difficult dilemma. On the one hand, they accepted Brentano's threefold characterization of intentionality, but on the other hand, science had rendered unacceptable the idea of a mental stuff that supposedly confers the "magic" of intentionality. If mental states are in fact states of the physical brain, then a core thesis of Brentano was wrong: some physical phenomena – some patterns of brain activity, for example – do exhibit intentionality. So philosophers sought a coherent story according to which intentionality is the mark of beliefs, but beliefs are not states of spooky stuff.
Beliefs as Linguaform Structures
How can neuronal activity be about anything? Roughly, one popular answer is to say that neuronal activity per se cannot be about anything. Nevertheless, thoughts, formulated in sentences of a language, can be. How does this work? Fodor (1975) provided an elaborate defense of the widely accepted idea that beliefs are linguaform structures in the language of thought. Since many cognitive scientists in the 1980s assumed a symbol-processing model of cognition, and since language-use was assumed to be essentially symbol manipulation, the language-of-thought hypothesis was an appealing platform to support the prevailing assumptions.
Whence the intentionality of thoughts? Semantics, and representational properties in general, were supposed to be the outcome, somehow, of the complex syntax governing the formal symbols. As some philosophers summed up the idea, if the syntax of a formal system is set up right, the semantics (the aboutness) will take care of itself (Haugeland 1985). How would that syntax come to be set up just right? As Fodor saw it, Mother Nature provided the innate language-of-thought, and thus intentionality came along with the rest of our genetically specified capacities.
Does this not mean that at some level of brain organization intentionality is explained in terms of neural properties? Surprisingly, philosophers here chorused a resounding "No." The grounds for blocking any possible explanation were many and various, but they all basically boiled down to a firm conviction that cognitive operations are analogous to running software on a computer. Just as no one explains in terms of hardware how a mail application works, language-use cannot be explained in terms of biological properties of the nervous system. Software, the story went, can be run on many different kinds of hardware,2 so what we want to know is the nature of the software. The hardware – the details of how the brain implements the software – is largely irrelevant because hardware and software levels are independent. Still a popular analogy, the hardware/software story encouraged philosophers to say that neural properties are sheerly causal mechanisms that run intentional software; they are not themselves intentional devices. From another angle, the point was that neurobiological explanations cannot be sensitive to the logical relations between cognitive states or to meaning or "aboutness." They capture only causal properties. (For criticism of this perspective, see Rumelhart, Hinton, and McClelland 1986; P.M. Churchland 1989; Churchland and Sejnowski 1989.)
Dualism, pushed out of one place, in effect resurfaced in another. The Cartesian dualism of substances was replaced by a dualism of "properties" – the idea that propositional attitudes are at "the software level" and as such they cannot be explained by neurobiology. Property dualism, resting on a dubious hardware/software analogy, was shopped as scientifically more respectable than substance dualism. In sum, on Brentano's view, the "magic" of intentionality was explained by the hypothesis that the nonphysical mind is in the "aboutness" business. For the property dualists, the magic of intentionality was passed on to the "aboutness" of sentences, either in the language-of-thought, or, failing that, in some learned language.
Postulating an essential dependency between beliefs and language-use spawned its own range of intractable puzzles (P.S. Churchland 1986). One obvious problem concerns nonverbal humans and other animals. If having beliefs requires having a language, then preverbal and nonverbal humans, as well as nonverbal animals, must be unable to have beliefs, or can have beliefs only in a metaphorical, "as if," sense. (For a defense of this view, see the highly influential philosopher Donald Davidson 1982 and 1984. See also Brandom 1994 and Wettstein 2004.) The idea that only verbal humans have beliefs has been difficult to defend, especially since nonverbal children and animals regularly display knowledge of such things as the whereabouts of hidden or distant objects, what can be seen from another's point of view, and what others may know (Akhtar and Tomasello 1996; Call 2001; Tomasello and Bates 2001). Human language may be a vehicle for conveying signals to others about these states, but beliefs in general are probably not unavoidably linguaform (Tomasello 1995). From the broader perspective of animal behavior, linguistic structures such as subject-predicate propositions are not the most promising model for representations in general, and not even for beliefs in particular. Most probably, linguaform structures are one means – albeit an impressively flexible and rich means – whereby those representations can be cast into publicly accessible form (Lakoff 1987; Langacker 1990; Churchland and Sejnowski 1992; P.M. Churchland 2012).
A related problem is that even in verbal humans, some deep or background beliefs may be expressible in language only roughly or approximately. For example, background beliefs about social conventions or complex skills, though routinely displayed in behavior, may well be difficult to articulate in language. Social conventions about how close to stand to a new acquaintance or how much to mimic his gestures can be well understood, but may be followed nonconsciously (Dijksterhuis, Chartrand, and Aarts 2007; Chartrand and Dalton 2008).
A further difficulty for the "beliefs are linguaform" approach is that it embraces a basic discontinuity between propositional attitudes (such as beliefs) on the one hand, and other mental representations (feeling hungry, seeing a bobcat, wanting water) on the other. Embracing such a discontinuity requires postulating special mechanisms to account for such ordinary processes as how we acquire beliefs about the world from perceptual experience, and how feelings, motives, and emotions influence beliefs.
A slightly different approach, favored mainly by Dennett (1987), avoids some of these problems; it is called interpretationalism (see also Davidson 1982, 1984). The core of his idea is that if I can explain and predict the behavior of a system by attributing to it beliefs and other propositional attitudes, then to all intents and purposes it actually has those representational states. To adopt such an interpretation, according to Dennett, is to "take the intentional stance" toward the creature, and there is nothing more to intentionality than being the target of the intentional stance.
Consistent with this view, Dennett opined that nothing in the device needs to correspond to the structural features of the proposition specifying the belief (that is, the subject, predicate, quantifiers, and so forth), since, after all, your belief is just my interpretation of your behavior that is predictively more successful than any other strategy I might have used. Dennett emphasizes that there are differences in the degree of sophistication of beliefs, and hence a human's beliefs can safely be assumed to be more sophisticated than those of a leech. As he sees it, exactly how a device must be structured and organized to have behavior that invites interpretation in terms of very fancy beliefs is a question for empirical research, but not one that will yield any new insights into the nature of intentionality.
A major shortcoming of Dennett's approach is that it does not address, except in the most general terms, how internal states come to represent the external world or the brain's own body. It typically considers such details as irrelevant to the problem since whatever the brain's (or computer's) inner structure, if I can best predict its behavior by attributing beliefs, then beliefs it...