Rethinking Language, Mind, and Meaning

eBook - ePub

  1. 256 pages
  2. English

About this book

In this book, Scott Soames argues that the revolution in the study of language and mind that has taken place since the late nineteenth century must be rethought. The central insight in the reigning tradition is that propositions are representational. To know the meaning of a sentence or the content of a belief requires knowing which things it represents as being which ways, and therefore knowing what the world must be like if it is to conform to how the sentence or belief represents it. These are truth conditions of the sentence or belief. But meanings and representational contents are not truth conditions, and there is more to propositions than representational content. In addition to imposing conditions the world must satisfy if it is to be true, a proposition may also impose conditions on minds that entertain it. The study of mind and language cannot advance further without a conception of propositions that allows them to have contents of both of these sorts. Soames provides it.

He does so by arguing that propositions are repeatable, purely representational cognitive acts or operations that represent the world as being a certain way, while requiring minds that perform them to satisfy certain cognitive conditions. Because they have these two types of content—one facing the world and one facing the mind—pairs of propositions can be representationally identical but cognitively distinct. Using this breakthrough, Soames offers new solutions to several of the most perplexing problems in the philosophy of language and mind.

CHAPTER 1
The Need for New Foundations
In this book, I will argue that the revolution in the study of language, mind, and meaning led by advances in philosophical logic from Frege through Tarski, Kripke, Montague, and Kaplan must be reconceptualized. Although much progress has been made by adapting intensional logic to the study of natural language, the resulting theoretical framework has limitations that require rethinking much of what has guided us up to now. I will begin by sketching where we are in the study of linguistic meaning and how we got there, after which I will identify three main ways in which I believe the current theoretical framework must change.
The story begins with the development of symbolic logic by Gottlob Frege and Bertrand Russell at the end of the nineteenth and beginning of the twentieth centuries. Initially, their goal was to answer two questions in the philosophy of mathematics: What is the source of mathematical knowledge? and What are numbers? They answered (roughly) that logic is the source of mathematical knowledge, that zero is the set of concepts true of nothing, that one is the set of concepts true of something, and only that thing, that two is the set of concepts true of some distinct x and y, and nothing else, and so on. Since the concept being non-self-identical is true of nothing, it is a member of zero; since the concept being the Hempel lecturer in 2013 is true of me and only me, it is a member of the number one; since the concept being my son is true of Greg and Brian Soames, and only them, it is a member of the number two. Other integers follow in train. Since numbers are sets of concepts, the successor of a number n is the set of concepts F such that for some x of which F is true, the concept being an F which is not identical to x is a member of n. Natural numbers are defined as those things that are members of every set that contains zero, and that, whenever it contains something, always contains its successor. Multiplication is defined as repeated addition, while addition is defined as repeated application of the successor function. In this way arithmetic was derived from what Frege and Russell took to be pure logic. When, in similar fashion, classical results of higher mathematics were derived from arithmetic, it was thought that all classical mathematics could be so generated. So, logic was seen as the foundation of all mathematical knowledge.1
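Put in modern notation, the definitions just sketched come to roughly the following (my own compressed formulation, which glosses over the differences between Frege's and Russell's systems, and writes n⁺ for the successor of n):

```latex
\begin{align*}
0 &= \{\, F : \neg\exists x\, Fx \,\}\\
1 &= \{\, F : \exists x\,(Fx \wedge \forall y\,(Fy \rightarrow y = x)) \,\}\\
2 &= \{\, F : \exists x\,\exists y\,(x \neq y \wedge Fx \wedge Fy \wedge \forall z\,(Fz \rightarrow z = x \vee z = y)) \,\}\\
n^{+} &= \{\, F : \exists x\,(Fx \wedge [\lambda y.\ Fy \wedge y \neq x] \in n) \,\}\\
\mathbb{N} &= \{\, x : \forall K\,((0 \in K \wedge \forall y\,(y \in K \rightarrow y^{+} \in K)) \rightarrow x \in K) \,\}
\end{align*}
```

The last clause is the definition in the text: the natural numbers are whatever belongs to every set containing zero and closed under successor.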
That, at any rate, was the breathtaking dream of Frege and Russell. The reality was more complicated. Their first step was the development of the predicate calculus (of first and higher orders), which combined truth-functional logic, familiar from the Stoics onward, with a powerful new account of generality supplanting the more limited syllogistic logic dating back to Aristotle. The key move was to trade the subject/predicate distinction of syllogistic logic for an expanded version of the function/argument distinction from mathematics. Applied to quantification, this meant treating the claim that something is F as predicating being true of something of the property being F or, in Russell’s convenient formulation, of the function that maps an object onto the proposition that it is F, while treating the claim that everything is F as predicating being true of each object of that property or function. The crucial point, resulting in a vast increase in expressive power, is the analysis of all and some as expressing higher-order properties of properties or propositional functions expressed by formulas of arbitrary complexity.1
Although the first-order fragment of Frege’s system was sound and complete—in the sense of proving all and only genuine logical truths—the concepts needed to define and prove this (while also proving that the higher-order system was sound but, like all such systems, incomplete) were still fifty years away. In itself, this didn’t defeat the reduction of mathematics to logic. More serious was the intertwining of this early stage of modern logic with what we now call “naïve set theory”—according to which for every stateable condition on objects there is a set (perhaps empty, perhaps not) of all and only the things satisfying it. To think of this as a principle of logic is to think that talk of something’s being so-and-so is interchangeable with talk of its being in the set of so-and-so’s.
When Russell’s paradox demonstrated the contradiction at the heart of this system, it quickly became clear that the principles required to generate sets without falling into contradiction are less obvious, and open to greater doubt, than the arithmetical principles that Frege and Russell hoped to derive from them. This undercut the initial epistemological motivation for reducing mathematics to logic. Partly for this reason, the subsequent boundary that grew up between logic and set theory was one in which the latter came to be viewed as itself an elementary mathematical theory, rather than a part of logic. Reductions of mathematical theories to set theory could still be done, with illuminating results for the foundations of mathematics, but the philosophical payoff was not what Frege and Russell initially hoped for.2
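The contradiction itself can be stated in two lines. Naïve comprehension yields a set for the condition of non-self-membership, and that set both is and is not a member of itself:

```latex
R = \{\, x : x \notin x \,\}
\qquad\Longrightarrow\qquad
R \in R \leftrightarrow R \notin R
```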
This philosophical shortcoming was compensated for by the birth of new deductive disciplines—proof theory and model theory—to study the powerful new logical systems that had been developed. A modern system of logic consists of a formally defined language, plus a proof procedure, often in the form of a set of axioms and rules of inference. A proof is a finite sequence of lines each of which is an axiom or a formula obtainable from earlier lines by inference rules. Whether or not something counts as a proof is decidable merely by inspecting the formula on each line, and determining whether it is an axiom, and, if it isn't, whether it bears the structural relation to earlier lines required by the rules. Since these are trivially decidable questions, it can always be decided whether something counts as a proof, thus forestalling the need to prove that something is a proof. In a purely logical (first-order) system, the aim is to prove all and only the logical truths, and to be able to derive from any statement all and only its logical consequences.
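The decidability of proof-checking can be made vivid with a toy sketch: a putative proof is checked line by line, asking only whether each line is an axiom or follows from earlier lines by a rule. The formulas, the axiom set, and the single rule (modus ponens) below are my own illustration, not a system discussed in the text.

```python
# A toy illustration of why proof-checking is decidable. Formulas are
# atomic strings or ("implies", antecedent, consequent) tuples. Each line
# of a proof must be an axiom, or must follow from two earlier lines by
# modus ponens: from A and ("implies", A, B), infer B.

def follows_by_modus_ponens(line, earlier):
    """Is there a pair A and A -> line among the earlier lines?"""
    return any(("implies", a, line) in earlier for a in earlier)

def is_proof(lines, axioms):
    """Check each line by inspecting only it and the lines before it."""
    for i, line in enumerate(lines):
        earlier = lines[:i]
        if line not in axioms and not follows_by_modus_ponens(line, earlier):
            return False
    return True

# Hypothetical axioms: p, p -> q, q -> r.
axioms = {"p", ("implies", "p", "q"), ("implies", "q", "r")}
proof = ["p", ("implies", "p", "q"), "q", ("implies", "q", "r"), "r"]

print(is_proof(proof, axioms))   # each line is an axiom or follows by MP
print(is_proof(["r"], axioms))   # r is neither an axiom nor derivable here
```

Each check is a finite inspection of finitely many earlier lines, which is the point of the passage: there is never a need to prove that something is a proof.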
These notions are defined semantically. To think of them in this way is to think of them as having something to do with meaning. Although this wasn’t exactly how the founder of model theory, Alfred Tarski, initially conceived them, it is how his work was interpreted by Rudolf Carnap and many who followed. The key idea is that we can study the meaning of sentences by studying what would make them true. This is done by constructing abstract models of the world and checking to see which sentences are true in which models. When a sentence is true in all models it is a logical truth; when the truth of one sentence in a model always guarantees the truth of another, the second is a logical consequence of the first; when two sentences are always true together or false together they are logically equivalent, which is the logician’s approximation of sameness of meaning.
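For the simplest case, propositional logic, these model-theoretic definitions can be computed directly: a "model" is just an assignment of truth values to the sentence letters, and we survey all of them. The formula encoding below is my own illustrative sketch, not anything in the text.

```python
from itertools import product

# A formula is a nested tuple: ("var", name), ("not", f), ("and", f, g),
# ("or", f, g), or ("implies", f, g).

def variables(f):
    """Collect the sentence letters occurring in a formula."""
    if f[0] == "var":
        return {f[1]}
    return set().union(*(variables(sub) for sub in f[1:]))

def evaluate(f, model):
    """Truth value of f in a model (a dict from letters to booleans)."""
    op = f[0]
    if op == "var":
        return model[f[1]]
    if op == "not":
        return not evaluate(f[1], model)
    if op == "and":
        return evaluate(f[1], model) and evaluate(f[2], model)
    if op == "or":
        return evaluate(f[1], model) or evaluate(f[2], model)
    if op == "implies":
        return (not evaluate(f[1], model)) or evaluate(f[2], model)

def models(letters):
    """Every assignment of truth values to the given letters."""
    letters = sorted(letters)
    return [dict(zip(letters, row))
            for row in product([True, False], repeat=len(letters))]

def logical_truth(f):
    """True in all models."""
    return all(evaluate(f, m) for m in models(variables(f)))

def consequence(f, g):
    """Every model making f true makes g true."""
    vs = variables(f) | variables(g)
    return all(evaluate(g, m) for m in models(vs) if evaluate(f, m))

def equivalent(f, g):
    """Always true together or false together."""
    return consequence(f, g) and consequence(g, f)

p, q = ("var", "p"), ("var", "q")
print(logical_truth(("or", p, ("not", p))))   # excluded middle
print(consequence(("and", p, q), p))          # conjunction elimination
print(equivalent(("implies", p, q), ("or", ("not", p), q)))
```

Note that the last check illustrates the passage's closing point: p → q and ¬p ∨ q are logically equivalent, the logician's approximation of sameness of meaning.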
By the mid-1930s, the model and proof theories of the first- and second-order predicate calculi were well understood and inspiring new projects. One was modal logic, which introduced an operator it is logically/analytically/necessarily true that—the prefixing of which to a standard logical truth produces a truth. Apart from confusion about what logical, semantic, or metaphysical notion was to be captured, the technical ideas soon emerged. Since the new operators are defined in terms of truth at model-like elements, logical models for modal languages had to contain such elements, now dubbed possible world-states, thought of as ways the world could have been. This development strengthened the Fregean idea that for a (declarative) sentence S to be meaningful is for S to represent the world as being a certain way, which is to impose conditions the world must satisfy if S is to be true.
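The idea of truth at possible world-states can likewise be sketched for a toy modal language. The model below (three invented world-states, with every state accessible from every other, as in S5) is my own illustration of the evaluation clauses, not a system from the text.

```python
# A minimal sketch of truth at possible world-states. "necessarily p" is
# true at a world-state iff p is true at every world-state; "possibly p"
# iff p is true at some world-state. Worlds and valuation are invented.

worlds = ["w1", "w2", "w3"]
# which atomic sentences are true at which world-state
valuation = {"w1": {"p", "q"}, "w2": {"p"}, "w3": {"p"}}

def true_at(f, w):
    if isinstance(f, str):                    # atomic sentence
        return f in valuation[w]
    op = f[0]
    if op == "not":
        return not true_at(f[1], w)
    if op == "necessarily":
        return all(true_at(f[1], v) for v in worlds)
    if op == "possibly":
        return any(true_at(f[1], v) for v in worlds)

print(true_at(("necessarily", "p"), "w1"))  # p holds at every world-state
print(true_at(("necessarily", "q"), "w1"))  # q fails at w2
print(true_at(("possibly", "q"), "w2"))     # q holds at w1
```

Checking a sentence thus requires looking beyond the actual world-state to all the ways the world could have been, which is exactly what distinguishes modal truth conditions from extensional ones.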
Hence, it was thought, meaning could be studied by using the syntactic structure of sentences plus the representational contents of their parts to specify their truth conditions. With the advent of modality, these conditions were for the first time strong enough to approximate the meanings of sentences. To learn what the world would have to be like to conform to how a sentence (of a certain sort) represents it is to learn something approximating its meaning. The significance of this advance for the study of language can hardly be overstated. Having reached this stage, we had both a putative answer to the question What is the meaning of a sentence? and a systematic way of studying it.
This is roughly where the philosophically inspired study of linguistically encoded information stood in 1960. Since then, philosophers, philosophical logicians, and theoretical linguists have expanded the framework to cover large fragments of human languages. Their research program starts with the predicate calculi and is enriched piece by piece, as more natural-language constructions are added. Modal operators include it is necessarily the case that, it could have been the case that, and the counterfactual operator if it had been the case that ______, then it would have been the case that ______. Operators involving time and tense can be treated along similar lines. Generalized quantifiers have been added, as have adverbs of quantification, and propositional attitude verbs such as believe, expect, and know. We also have accounts of adverbial modifiers, comparatives, intensional transitives, indexicals, and demonstratives. At each stage, a language fragment for which we already have a truth-theoretic semantics is expanded to include more features found in natural language. As the research program advances, the fragments of which we have a good truth-theoretic grasp become more powerful and more fully natural language–like. Although there are legitimate doubts about whether all aspects of natural language can be squeezed into one or another version of this representational paradigm, the prospects of extending the results so far achieved justify optimism about eventually arriving at a time when vastly enriched descendants of the original systems of Frege and Russell approach the expressive power of natural language, allowing us to understand the most basic productive principles by which information is linguistically encoded.
This, in a nutshell, is the dominant semantic conception in theoretical linguistics today. If all that remained were to fill in gaps and flesh out empirical details, philosophers would have done most of what was needed to transform their initial philosophical questions about mathematics into scientific questions about language. However, we haven’t yet reached that point. While the dominant conception has made progress in using truth conditions to model representational contents of sentences, it has not paid enough attention to the demands that using and understanding language place on agents. Given the logical, mathematical, and philosophical origins of the enterprise, it could hardly have been otherwise. When what was at stake was, primarily, the investigation of the logical, analytic, or necessary consequences of mathematical and scientific statements, there was no theoretically significant gap to be considered between what a sentence means and the claim it is used to make, and hence no need either to investigate how speaker-hearers might fill such gaps or to study what understanding and using a language consist in, and no need to individuate thoughts or meanings beyond necessary equivalence.
There have, to be sure, been important attempts to address these issues as the dominant semantic model has extended its reach beyond the formal languages of logic, mathematics, and science. We need, for example, to look no further than David Kaplan’s logic of demonstratives to find a way of accommodating the idea that what a sentence means and what it is standardly used to say are—though systematically related—not always the same. What we don’t find in Kaplan, or in the dominant approach generally, is any retreat from the idea that advances in understanding the semantics of natural language are closely and inextricably tied to advances in extending the reach of the methods of formal logic and model theory. This, I believe, must change if we are to reach our goal of founding a truly scientific study of language and information.
In this book, I will outline three steps in that direction. First, I will use examples involving several linguistic constructions to argue that we must stop oversimplifying the relationship between the information semantically encoded by (a use of) a sentence (in a context), on the one hand, and the assertions it is there used to make, the beliefs it is there used to express, and the information there conveyed by an utterance of it, on the other. It has often been assumed that the semantic content of a sentence is identical, or nearly so, with what one who accepts it thereby believes, and with what one who utters it thereby asserts. This is far too simple; there is a significant gap between the semantic contents of sentences and the information contents of their uses.
Second, I will argue that we need to pay more attention to what understanding a linguistic expression E requires—beyond, or other than, knowing of the representational content of E that it is the content of E. It is often assumed that since meaning is semantically encoded information, and since the information encoded by a nonindexical sentence S is the proposition p it expresses, understanding S is knowing of S that it encodes p. I will argue that this is not so. Semantic knowledge of this simple representational sort is insufficient for understanding because, as I will illustrate in chapter 4, to understand a word, phrase, or sentence is to be able to use it in expected ways in communicative interactions with members of one’s linguistic community, which involves graded recognitional and inferential ability that often goes well beyond a cognitive grasp of content.3 The semantic knowledge in question is also unnecessary for understanding a sentence because, as I will argue in chapters 2 and 4, to understand S is to be disposed to use S to entertain p—which, contrary to what is often assumed, doesn’t require being disposed to make p the object of one’s thought, or to predicate any relation holding between S and p. Once we have a proper understanding of what propositions really are, it will be easy to see that to entertain one is not to have any thought about or cognition of it at all, but to perform the cognitive operations in terms of...

Table of contents

  1. Cover Page
  2. Title Page
  3. Copyright Page
  4. Dedication Page
  5. Contents
  6. Acknowledgments
  7. Chapter 1: The Need for New Foundations
  8. Chapter 2: The Metaphysics and Epistemology of Information
  9. Chapter 3: Thinking of Oneself, the Present Moment, and the Actual World-State
  10. Chapter 4: Linguistic Cognition, Understanding, and Millian Modes of Presentation
  11. Chapter 5: Perceptual and Demonstrative Modes of Presentation
  12. Chapter 6: Recognition of Recurrence
  13. Chapter 7: Believing, Asserting, and Communicating Propositions of Limited Accessibility
  14. Chapter 8: Recognition of Recurrence Revisited
  15. Chapter 9: Situating Cognitive Propositions in a Broader Context
  16. Chapter 10: Overcoming Objections
  17. Chapter 11: Worries, Opportunities, and Unsolved Problems
  18. References
  19. Index