Artificial Intelligence

The Case Against
About this book

The purpose of this book, originally published in 1987, was to contribute to the advance of artificial intelligence (AI) by clarifying and removing the major sources of philosophical confusion which at the time continued to preoccupy scientists and thereby impede research. Unlike the vast majority of philosophical critiques of AI, however, each of the authors in this volume has made a serious attempt to come to terms with the scientific theories that have actually been developed, rather than attacking superficial 'straw men' which bear scant resemblance to those complex theories. For each is convinced that the philosopher's responsibility is to contribute, from his own special intellectual point of view, to the progress of such an important field, rather than to sit in lofty judgement and dismiss the efforts of his scientific peers.

The aim of this book is thus to correct some of the common misunderstandings of its subject. The technical term Artificial Intelligence has created considerable unnecessary confusion because of the ordinary meanings associated with it, and for that very reason, the term is endlessly misused and abused. The essays collected here all aim to expound the true nature of AI, and to remove the ill-conceived philosophical discussions which seek answers to the wrong questions in the wrong ways. Philosophical discussions and decisions about the proper use of AI need to be based on a proper understanding of the manner in which AI-scientists achieve their results; in particular, in their dependence on the initial planning input of human beings.

The collection combines the Anglo-Saxon school of analytical philosophy with scientific and psychological methods of investigation. The distinguished authors in this volume represent a cross-section of philosophers, psychologists, and computer scientists from all over the world. The result is a fascinating study of the nature and future of AI, written in a style which is certain to appeal to and inform laymen and specialists alike.


1 COMPUTATIONAL PSYCHOLOGY AND INTERPRETATION THEORY

Hilary Putnam
I once got into an argument after dinner with my friend Zenon Pylyshyn. The argument concerned the following assertion which Pylyshyn made: ‘cognitive psychology is impossible if there is not a well-defined notion of sameness of content for mental representations’. It occurred to me later that the reasons I have for rejecting this assertion tie in closely with Donald Davidson’s well-known interests in both meaning theory and the philosophy of mind. Accordingly, with Zenon’s permission and (I hope) forgiveness, I have decided to make my arguments against his assertion the subject of this paper.

Mental Representation

Let us consider what goes on in the mind when we think ‘there is a tree over there’, or any other common thought about ordinary physical things. On one model, the computer model of the mind, the mind has a ‘program’, or set of rules, analogous to the rules governing a computing machine, and thought involves the manipulation of words and other signs (not all of this manipulation ‘conscious’, in the sense of being able to be verbalized by the computer). This model, however, is almost vacuous as it stands (in spite of the heat it generates among those who do not like to think that a mere device, such as a computing machine, could possibly serve as a model for something as special as the human mind). It is vacuous because the program, or system of rules for mental functioning, has not been specified; and it is this program that constitutes the psychological theory. Merely saying that the correct psychological theory, whatever it may be, can be represented as a program (or something analogous to a program) for a computer (or something analogous to a computer) is almost empty; for virtually any system that can be described by a set of laws can at least be simulated by a computer. Anything from Freudian depth psychology to Skinnerian behaviorism can be represented as a kind of computer program.
Today, however, computer scientists working in ‘artificial intelligence’, and cognitive psychologists thinking about reference, semantic representation, language use, and so on, have a somewhat more specific hypothesis in mind than the almost empty hypothesis that the mind can be modelled by a digital computer. (Even that hypothesis is not wholly empty, because it does imply something about the causal structure of mental processes: it implies that they take place according to deterministic or probabilistic rules of sequencing specified by a finite program.) The further hypothesis to which workers on computing machines and cognitive psychologists have been converging is this: that the mind thinks with the aid of representations. There seem to be two different ideas, actually, which are both involved in talk of ‘representations’ today.
The first idea, based on experience with trying to program computers to simulate intelligent behaviour, is that thinking involves not just the manipulation of arbitrary objects or symbols, but requires the manipulation of symbols that have a very specific structure, the structure of a formalized language. The experience of computer people was that the most interesting and successful programs in ‘artificial intelligence’ typically turned out to involve giving the computing machine something like a formalized language and a set of rules for manipulating that formalized language (‘reasoning’ in the language, so to speak).
The second idea associated with the term ‘representation’ is that the human mind thinks (in part) by constructing some kind of a ‘model’ of its environment: a ‘model of the world’. This ‘model’ need not, of course, literally resemble the world. It is enough that there should be some kind of systematic relation between items in the representational system and items ‘out there’, so that what is going on ‘out there’ can be read off from its representational system by the mind.
Once a reference definition has been given for a formalized language, a set of sentences in that language can serve as a ‘representational system’ or ‘model of the world’.
Suppose, for example, we wish to represent the fact that the city of Paris is bigger than the city of Vienna. If we have a predicate, say F, which represents the relation bigger than (i.e. if the open sentence which we write in the formal notation as ‘Fxy’ is correlated to the relation which holds between any two things if and only if they are both cities and the first is larger — in, say, population — than the second), and if we have ‘individual constants’ or proper names, say, a and b, which represent the cities of Paris and Vienna (i.e. ‘a’ is correlated to Paris and ‘b’ is correlated to Vienna by the reference definition for the language), then we can represent the fact that Paris is a bigger city than Vienna by just including in our list of accepted sentences (our ‘theory of the world’) the sentence ‘Fab’. In a similar way, any state of affairs, however complex, that can be expressed using the predicates, proper names, and logical devices of the formal language can be asserted to obtain by including in the ‘theory of the world’ the formula that represents that state of affairs.
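To make the bookkeeping concrete, here is a minimal sketch in Python, not drawn from the text, of a reference definition and a one-sentence ‘theory of the world’ of the kind just described; the predicate letter is Putnam's, but the population figures, variable names, and helper function are invented purely for illustration.

```python
# A toy reference definition and 'theory of the world' (illustrative only).

# Facts about the world 'out there' (made-up figures for illustration).
CITY_POPULATION = {"Paris": 2_100_000, "Vienna": 1_900_000}

# Reference definition: 'F' is correlated with the bigger-city-than relation,
# 'a' with Paris and 'b' with Vienna.
REFERENCE = {
    "F": lambda x, y: CITY_POPULATION[x] > CITY_POPULATION[y],
    "a": "Paris",
    "b": "Vienna",
}

# The 'theory of the world': the list of accepted atomic sentences.
THEORY = ["Fab"]

def holds_in_world(sentence: str) -> bool:
    """Check whether an atomic sentence such as 'Fab' is true under the reference definition."""
    predicate, *constants = sentence              # 'F', ['a', 'b']
    relation = REFERENCE[predicate]               # the correlated relation
    objects = [REFERENCE[c] for c in constants]   # the correlated individuals
    return relation(*objects)

# Including 'Fab' in the theory represents the state of affairs that Paris is a
# bigger city than Vienna; here we simply read that state of affairs back off.
for sentence in THEORY:
    print(sentence, "is accepted; true in the world?", holds_in_world(sentence))
```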
When our ‘representational system’ is itself a theory, and when our method of employing our representational system involves making formal deductions, we see that one and the same object — the formalized language, including the rules for deduction — can be the formalized language that computer scientists have been led to postulate as the brain or mind’s (the difference does not appear particularly significant, from this perspective) medium of computation, and, simultaneously, the medium of representation. The mind uses a formalized language (or something significantly like a formalized language) both as medium of computation and medium of representation. This may be called the working hypothesis of cognitive psychology today.
Part of this working hypothesis seems to me certainly correct. I believe that we cannot account at all for the functioning of thought and language without regarding at least some mental items as representations. When I think (correctly) ‘there is a tree in front of me’, the occurrence of the word ‘tree’ in the sentence I speak in my mind is a meaningful occurrence and one of the items in the extension of that occurrence of the word ‘tree’ is the very tree in front of me. Moreover, the open sentence ‘x is in front of me’ is correlated (in the correct semantics for my language) with the relational property of being in front of me, and the entire sentence ‘there is a tree in front of me’ is, by virtue of these and similar facts, one which is true if and only if there is a tree in front of me.
Where there is room for psychologists to differ is over how many mental items are representations, how useful it is to postulate a large and complex unconscious system of representations in order to explain conscious thought and intelligent action, etc.

The Verificationist Semantics of ‘Mentalese’

So far what I have said is in line with the thinking of Pylyshyn and other ‘propositionalist’ cognitive psychologists. For the sake of the argument, we shall assume all this is right. Of course, the actual story may be much more complicated. The mind may employ more than one formalized language (or, rather, formalized-language-analog). Different parts of the brain may compute in different ‘media’. And both sentence-analogs and image-analogs may be used in the actual computational procedures, along with things that are neither. But let us assume the best case for Pylyshyn’s view: a mind which does all its computing in one formalized language.
In what does the mind’s understanding of its own medium of computation consist? It will do no good to say, as Fodor (1975) has, that we should not apply the word ‘understand’ to ‘mentalese’ itself. (‘Mentalese’ is a name for the hypothetical formalized-language-analog in the brain.) For ‘mentalese’ and ‘formalized-language-in-the-brain’ are metaphors. They may be scientifically useful and rich metaphors; but as metaphors they are inseparable from the notion of understanding. Something cannot literally be a language unless it can be understood; and something cannot be a language-analog unless there is a suitable understanding-analog. If some representations in the brain are sentence-analogs and predicate-analogs, then what is the corresponding understanding-analog?
The answer, I suggest, is this: the brain’s ‘understanding’ of its own ‘medium of computation and representation’ consists in its possession of a verificationist semantics for the medium, i.e. of a computable predicate1 which can represent acceptability, or warranted assertibility, or credibility. Idealizing, we treat the language as interpreted (in part) via a set of rules which assign degrees of confirmation (i.e. subjective probabilities) to the sentence-analogs relative to experiential inputs and relative also to other sentence-analogs. Such rules must be computable; and their ‘possession’ by the mind/brain/machine consists in its being ‘wired’ to follow them, or having come to follow them as a result of learning. (I do not assume that mentalese must be innate, or that it must be disjoint from the natural language the speaker has acquired.)
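As a purely illustrative sketch of what a computable rule of this sort might look like (this is not Putnam's own formalism; the sentences, numbers, and names below are all invented), consider a toy updater that revises the degrees of confirmation of a few sentence-analogs in the light of an experiential input:

```python
# A toy, computable degree-of-confirmation function for a stock of
# sentence-analogs (illustrative only; a simple Bayesian rule stands in
# for whatever rules the mind/brain/machine is 'wired' to follow).

# Subjective probabilities currently attached to some sentence-analogs.
degrees = {"it is raining": 0.3, "the ground is wet": 0.4}

# How strongly a given experiential input bears on each sentence-analog:
# P(input | sentence true) and P(input | sentence false), chosen by hand.
LIKELIHOOD = {
    ("sees wet pavement", "it is raining"): (0.9, 0.2),
    ("sees wet pavement", "the ground is wet"): (0.95, 0.05),
}

def update_on_input(observation: str) -> None:
    """Revise degrees of confirmation by a fixed, computable procedure."""
    for sentence, prior in degrees.items():
        if (observation, sentence) not in LIKELIHOOD:
            continue  # this input is irrelevant to this sentence-analog
        p_if_true, p_if_false = LIKELIHOOD[(observation, sentence)]
        evidence = p_if_true * prior + p_if_false * (1 - prior)
        degrees[sentence] = p_if_true * prior / evidence

update_on_input("sees wet pavement")
print(degrees)  # both sentence-analogs are now more credible than before
```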
But why a verificationist semantics? Why not a meaning theory in Davidson’s sense?
Obviously, if we interpret mentalese as a ‘system of representation’ we do ascribe extensions to predicate-analogs and truth conditions to sentence-analogs. But the ‘meaning theory’ which represents a particular interpretation of mentalese is not psychology. In fact, if we formulate it as Davidson might, its only primitive notion is ‘true’, and ‘true’ is not a psychological notion. To spell this out: the meaning theory yields such theorems (‘T-sentences’) as (pretend that mentalese is English): ‘ “Snow is white” is true in mentalese if and only if snow is white’. This contains no psychological vocabulary at all.
We might try to say ‘well, the understanding consists in the brain’s knowing the T-sentences of the meaning theory’. But the notion of knowing cannot be a primitive notion in sub-personal cognitive psychology.2
Suppose we try to say: the mind understands without using representations what it is for snow to be white, and it knows the representation ‘snow is white’ is true if and only if that state of affairs holds. Not only does this treat the mind as something that ‘knows’ things, instead of analyzing knowing into more elementary and less intentional processes, but it violates the fundamental assumption of cognitive psychology, that understanding what states of affairs are, thinking about them, etc., cannot be done without representations. At bottom, we would be stuck with the myth of comparing representations directly with unconceptualized reality.
On the other hand, if we say, ‘the brain’s/mind’s use of the sentence “Snow is white” (or the corresponding sentence-analog) is such as to warrant the interpretation that “Snow is white” is true in mentalese if and only if snow is white, and this is what it means to say that the brain (implicitly) “knows” the T-sentence’, then we do not give any theory of what that ‘use’ consists in. This is what a verificationist semantics gives (and, as far as I can see, what only a verificationist semantics gives). I suggest, then, that verificationist semantics is the natural semantics for functionalist (or ‘cognitive’) psychology. Such a semantics has a notion of ‘belief’ (or ‘degree of belief’) which is what makes it cognitive; at the same time it is a computable semantics, which is what makes it functionalist.
Of course, we want the semantics to connect with action, and this means that the model must incorporate a utility function as well as a degree of confirmation function. This function, too, must be computable (or, strictly speaking, semi-computable). This idealization is, of course, severe: we are assuming that the belief-analog (represented by the degree-of-confirmation function) and the preference-analog (represented by the utility function) are both fully consistent. The actual (neurologically realized) analogs of both belief and preference (or belief-representations and preference-representations) may well be inconsistent, as long as there are procedures for resolving the inconsistencies when practical decisions have to be made. In a terminology used by Reichenbach in another context, consistency may be ‘de faciendo and not de facto’. What significance this has for philosophy of mind, I shall discuss briefly at the end of this paper.
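The connection with action can be illustrated in the same toy spirit: assuming (my numbers, not the text's) a degree-of-confirmation function over two states of affairs and a utility function over act-state pairs, the agent acts by maximizing expected utility.

```python
# A toy combination of degrees of confirmation and utilities into action
# (illustrative only; all sentences and numbers are made up).

confirmation = {"it will rain": 0.7, "it will stay dry": 0.3}

# Utility of each act in each state of affairs.
utility = {
    ("take umbrella", "it will rain"): 5,
    ("take umbrella", "it will stay dry"): 3,
    ("leave umbrella", "it will rain"): -10,
    ("leave umbrella", "it will stay dry"): 6,
}

def best_act(acts, states):
    """Return the act whose expected utility is greatest."""
    def expected_utility(act):
        return sum(confirmation[s] * utility[(act, s)] for s in states)
    return max(acts, key=expected_utility)

print(best_act(["take umbrella", "leave umbrella"],
               ["it will rain", "it will stay dry"]))  # -> 'take umbrella'
```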
For now, the problem is this: if the brain’s semantics for its medium of representation is verificationist and not truth-conditional, then what happens to the notion of the ‘content’ of a mental representation?

Two Ruritanian Children

Imagine that there is a country somewhere on earth called Ruritania. In this country let us imagine that there are small differences between the dialects which are spoken in the north and in the south. One of these differences is that the word ‘grug’ means silver in the northern dialect and aluminium in the southern dialect. Imagine two children, Oscar and Elmer, who grow up in Ruritania. They are as alike in genetic constitution and environment as you please, except that Oscar grows up in the south of Ruritania and Elmer grows up in the north of Ruritania. Imagine that in the north of Ruritania, for some reason, pots and pans are normally made of silver, whereas in the south of Ruritania pots and pans are normally made of aluminium. So northern children grow up knowing that pots and pans are normally made of ‘grug’, and southern children grow up knowing that pots and pans are normally made of ‘grug’.
We may suppose that Oscar and Elmer have the same ‘mental representation’ of ‘grug’, that they have the same beliefs in connection with grug, etc. Of course some of these beliefs will differ in meaning even if they are identical in verbal and mental representation. For example, when Oscar believes ‘my mother has grug pots and pans’ and when Elmer believes ‘my mother has grug pots and pans’ the indexical word ‘my’ refers to different persons, and hence the term ‘my mother’ refers to different mothers. But unless such small differences in collateral information are already enough to constitute a difference in the content of the mental representation (in which case it would seem that the ordinary distinction between the meaning of a sign and collateral information that we have in connection with the sign has been wholly abandoned),3 then it would seem that we should say that the content of the mental representation of ‘grug’ is exactly the same for Oscar and for Elmer at this stage in their lives.
I do not mean to suggest that the word ‘grug’ has the same meaning in Oscar’s idiolect as it does in Elmer’s idiolect at this stage; I’ve argued elsewhere (Putnam, 1975a) that the difference in reference in the two communities should be regarded as infecting the speech of the individual speakers. To spell this out: when Oscar tries to determine what is grug he will ultimately have to rely on ‘experts’. These experts need not necessarily be scientists, he may simply ask his parents (who may in turn consult store owners or even scientists). But the point is that since the extension of ‘grug’ is in fact different in the two communities, and since, on the theory of meaning that I have defended in other places, difference in extension constitutes difference of meaning, and since extension is fixed collectively and not individually, it ends up that the meaning of the word ‘grug’ in the idiolects of Oscar and Elmer is not the same even though there is nothing ‘psychological’, nothing ‘in their heads’, which constitutes the difference in meaning. Meanings aren’t in the head. There is a difference in the meaning of the word ‘grug’ in this case; but it is in the reference of the word, as objectively fixed by the practices of the community, and not in the conceptions of grug entertained by Oscar and Elmer.
But the concept of content that Pylyshyn is interested in and that Chomsky4 has expressed an interest in is one that would factor out such objective differences in extension. What Pylyshyn is looking for is a notion of the content of a mental representation in which ‘water’ on earth and ‘water’ on Twin Earth would be said to have the same content for speakers...

Table of contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright Page
  5. Table of Contents
  6. Acknowledgements
  7. Introduction Rainer P. Born and Ilse Born-Lechleitner
  8. 1 Computational Psychology and Interpretation Theory Hilary Putnam
  9. 2 Minds, Brains and Programs John R. Searle
  10. 3 Misrepresenting Human Intelligence Hubert L. Dreyfus
  11. 4 Parameters of Cognitive Efficiency — A New Approach to Measuring Human Intelligence Friedhart Klix
  12. 5 The Decline and Fall of the Mechanist Metaphor S. G. Shanker
  13. 6 A Wittgensteinian View of Artificial Intelligence Otto Neumaier
  14. 7 What is Explained by AI Models? Alfred Kobsa
  15. 8 Split Semantics: Provocations Concerning Theories of Meaning and Philosophy in General Rainer P. Born
  16. Further Reading
  17. Index