Language, Mind and Computation
About this book

This book explores how and in what ways the relationship between language, mind and computation can be conceived of, given that a number of foundational assumptions about this relationship remain unacknowledged in mainstream linguistic theory, yet continue to be the basis of theoretical developments and empirical advances.

Language, Mind and Computation by P. Mondal is available in PDF and ePUB format.

1

Introduction

For most of the twentieth century, linguistic theory had its chief preoccupation with form and meaning, grounded in the Saussurean legacy of the pairing of the signifier and the signified. And with it came the division between langue and parole, the former being the abstract (but arbitrary) system of relations between signifiers (or simply, signs) and the signified (meanings), and the latter being the entire constellation of speech events. What Saussure bequeathed to the study of language influenced, whether in a linear or non-linear manner, much of linguistic theorizing in the decades to come as well as in the subsequent century. A lot of what structuralism had to offer in the twentieth century was inherited from the Saussurean legacy, which posited, among other things, the primacy of langue over parole. But the relations between the signifier and the signified had by then captured the imagination not just of linguists but also of philosophers, literary theorists and anthropologists, for the exact nature of these relations needed to be thoroughly understood in any sphere that has any purported liaison with language. As it turns out, if the relations between the signifier and the signified are arbitrary, this adds to, rather than detracts from, the complexity of such relations, which have widely different and thus diverse manifestations across the languages of the world. Even if aspects of the Saussurean legacy have defined much of the progress in the emergence of modern linguistics as a science, it has not been at all easy to flesh out the intricacies involved in the relations between the signifier and the signified.
Looked at in a merely formal fashion, it makes sense to treat the relation between the signifier and the signified as a pairing of form and meaning, but a rigorous formalization of this relation has always been elusive, since the signifier does not always move in perfect lockstep with the signified, and, worse still, the signified itself remains poorly understood. Nor does this exhaust the worries. Questions regarding the grounding of the relation between the signifier and the signified have also erupted, in that we still wonder where to place this relation – in the ontological system of language itself, in the psychological domain or in the Platonic sphere.
All this exegesis is not meant to suggest that the inheritance of the Saussurean legacy has had disastrous consequences for linguistic theory. Not at all. Rather, it has opened up an enormous space of possibilities and wide ramifications that are still discussed and debated in current studies on language, and that encompass the very foundations of modern linguistic theory. The two fundamental aspects of the Saussurean legacy – the primacy of langue over parole and the relation between the signifier and the signified – have penetrated deep into linguistic theorizing. Whether in the structuralist tradition or in the period subsequent to it, these two aspects have configured and broadened the horizons of any serious thinking on aspects of language. However, it is all a story of an overall continuity that has prevailed over and above any breaks or cracks in the system of thinking on language. When structuralism flourished in the twentieth century with a heavy dose of analysis of form over meaning, the attempt to purge linguistic theory of any association with human intentionality, as evoked in the earlier traditions of philology, was solidified, though there was certainly some trade-off of ideas from behaviourist psychology, as evident in the work of Leonard Bloomfield (1933). Perhaps it was the gradual spread of structuralism all over Europe and the Americas that enabled the converging threads of structuralism to consolidate the Saussurean legacy by shedding the humanized element in language and building the scientific basis of the linguistic endeavour (Joseph 2002). Again, we encounter another form of continuity that cuts across all traditions of linguistic inquiry right from the Saussurean period, but this bore the mark of a discontinuity too, as specified just above, in having thrown away the diachronic garb of language study along with the intentionality-laden linguistic methodology.
But how is this continuity to be understood rather than explained away? It needs to be deeply understood before we embark upon any exploration of the discontinuities that defined themselves against the stratified layering of patterns of such continuity. As we go on, we will see that it is the mutual tension and undulation between such continuity and the discontinuities – which constitute breaks, if not from such continuity itself, then from a palpable contour of continuity telescoped from the earlier tradition of linguistic theorizing – that characterizes the crux of issues surrounding language.
It is with the advent of Generative Grammar in the second half of the twentieth century that another discontinuity, greater in scale and significance, started to appear on the landscape of linguistic theorizing. Noam Chomsky posed questions about the nature of language with respect to how it is acquired, known and instantiated in the human mind. It is these questions that made the advent of Generative Grammar a remarkable break from the earlier continuity derived from the structuralist tradition. Though Chomsky’s Syntactic Structures (1957) was targeted at the behaviourist essence of the structuralist tradition in linguistic theory, his later publications starting with Aspects of the Theory of Syntax (1965) began to erect the architectural foundations of Generative Grammar on rationalist principles. One of the central goals of a linguistic theory is to characterize the faculty of language in its form of growth, knowledge and perhaps use (Chomsky 2004, 2005), undergirded by the distinction between competence and performance, the former being the system of mental rules and representations and the latter denoting aspects of processing. Even if Chomsky (1965, 1980) has over the decades presented the faculty of language as a computational system with its own domain-specific cognitive structures which can be characterized as a system of rules and representations, the nature and form of rules and representations, as conceived of within a system of grammatical principles, have been questioned, reviewed and modified over the successive phases of development of the theory of Generative Grammar, from the Government and Binding model to the Minimalist Program (Epstein and Seely 2002, 2006). A more recent development in Generative Grammar has been phase theory, which partitions the computational space into smaller domains of operations called phases (Chomsky 2001), the goal being to achieve formal elegance and computational parsimony.
Beneath all this lie some of the deep-seated assumptions about the connection between language, mind and computation. We shall have a glimpse of this in the next section.

1.1 At the cross-section of language, mind and computation

It is the connection between language, mind and computation that the present book will try to investigate in a more fundamental and deeper way than has been possible so far. One of the ways of seeing the connection is this: linguistic representations are (internalized) mental representations, and operations on such representations by means of rule systems are computations. On the one hand, this connection between language, mind and computation has constituted or helped consolidate the plexus of assumptions underlying a whole gamut of linguistic theories which covers Generative Semantics (Lakoff 1971; Postal 1972), Head-Driven Phrase Structure Grammar (Pollard and Sag 1994), Lexical Functional Grammar (Bresnan 2001), Cognitive Grammar (Lakoff 1987; Langacker 1987) and so on, all under the broader canopy of the Saussurean legacy in continuity, although Autolexical Syntax (Sadock 1991, 2012) and the Parallel Architecture model of language (Jackendoff 2002) have split the signifier (that is, form) into smaller parts. On the other hand, this connection between language, mind and computation – whatever way it is framed – seems to be the least understood thing ever encountered in linguistic theorizing when juxtaposed with the conception of relations between the signifier and the signified in the system of langue. More significantly, the Generative tradition has made the whole relationship between language, mind and computation far more muddled and confounded, instead of teasing it all out. And this has been carried over into other linguistic theories as well. A part of the problem has something to do with the very nature of what language is, that is, a question about its ontology. Is language really abstract? It is not an easy question. Language may well be abstract, but then why do we feel that language is a concrete thing coming right out of our vocal apparatus? 
The other part of the problem is that something as nebulous as the human mind and something as spooky as computation have been linked to language. Ultimately, the triumvirate is not an easy combination. An example might give one a glimpse into the monstrosity of the problem. Let us look at the sentence below.
(1) Why does John laugh__?
Here ‘why’ is a Wh-phrase adjunct that is supposed to be interpreted in the gap shown in (1). This is the phenomenon of displacement which is also called movement in the Generative literature. But how does one know that ‘why’ has been displaced (or moved) from the gap shown in (1)? Well, the answer may come from one’s interpretation of ‘why’ as a part of the predicate ‘laugh’. If this is so, let us look at another example.
(2) Why does she wonder__ [how John laughs_]?
Here again ‘why’ seems to be interpreted at a place other than where it appears. But this time ‘why’ is a part of the predicate ‘wonder’, as indicated through the gap right after ‘wonder’ within the matrix clause, and ‘how’ is a part of the predicate ‘laughs’, as enclosed within the square brackets indicating the embedded clause. These two displacements are thus independent. But again the question is: how does one know this? The answer appears to be similar to what has been stated just above. If that is the case, one has to match ‘why’ with the gap at the appropriate place. But how does one know what an appropriate place is? One may well form a syntactic rule that states that ‘why’ must be matched in interpretation with the closest gap (in the local predicate). What is the nature of this rule? On the one hand, this rule cannot be formulated without any reference to the matching of an expression with the interpreted gap. On the other hand, the correspondence of an expression with the interpreted gap can only be possible if they are mentally represented as such. But how come such mental representations form part of a rule if rules are meaning-free operations running in a mindless machine? To put it another way, how come such mental representations constitute a point to which rules make reference if rules are just operations devoid of meaning? This brings us nearer to the concept of computation, as we can see. But does it then mean that the matching or correspondence of an expression with the interpreted gap is a computation or is what drives a computation? If we say that the matching or correspondence of an expression with the interpreted gap is itself a computation, the process of interpretation involving meaning becomes a part of computation. The other possibility is that the matching or correspondence of an expression with the interpreted gap is not a computation per se; rather, it is what drives a computation. This too makes computation sensitive to the process of interpretation involving meaning. Either way we lose out. How is this to be resolved? To my knowledge, this issue has not been given the attention it deserves.
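The ‘closest gap’ rule just described can be sketched as a toy procedure. The following snippet is a hypothetical illustration of the point at issue, not any actual grammar formalism; the function name and the clause encoding are invented here. Note that the rule cannot even be stated without representing which positions count as interpreted gaps – the very circularity the text goes on to discuss.

```python
# Hypothetical sketch (not from the book): match each fronted wh-adjunct
# with the closest gap, i.e. the gap in its local clause.

def match_wh_to_gap(clauses):
    """Each clause is a dict with a 'wh' item (or None) and a 'gap' flag.
    Pair each wh-word with the nearest gap from its own clause outward,
    returning (wh, clause_index) pairs."""
    pairings = []
    for i, clause in enumerate(clauses):
        if clause.get("wh") is None:
            continue
        # The 'closest gap' rule: the local clause's gap wins if it exists.
        for j in range(i, len(clauses)):
            if clauses[j].get("gap"):
                pairings.append((clause["wh"], j))
                break
    return pairings

# Sentence (2): "Why does she wonder [how John laughs]?"
# The matrix clause hosts 'why' and the gap after 'wonder';
# the embedded clause hosts 'how' and the gap after 'laughs'.
sentence2 = [
    {"wh": "why", "gap": True},   # matrix: ...wonder __
    {"wh": "how", "gap": True},   # embedded: ...laughs __
]
print(match_wh_to_gap(sentence2))  # [('why', 0), ('how', 1)]
```

Even in this toy form, the pairing procedure only works because the input already encodes which positions are interpreted gaps; the ‘rule’ presupposes the very representations it is supposed to operate on.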
However, this certainly does not complete the picture. There is more to it than meets the eye. Now let us assume that any syntactic rule for cases like (1)–(2) needs to make reference to the matching or correspondence of an expression with the interpreted gap. Now the question is: what do we mean when we say that the gap is interpreted? Are we referring to a process – a psychological process of interpretation? Or are we making a descriptive generalization which is (sort of) frozen as abstracting away from real instances (as in a case where we say the direction in which the sun rises is generally interpreted to be the east)? These questions are not trivial ones. They are intertwined with the issues of how language, mind and computation relate to each other in any linguistic theory that aims to take a stance on the relationship between language, mind and computation. Since these questions have arisen here, let us see how they bear on the issues to be discussed throughout this book.
Now if we pursue the first option – that is, when we say that the gap is interpreted we actually refer to a mental process of interpretation – this is going to put us in a vicious trap. If a syntactic rule is a syntactic rule only by virtue of appealing to a process of interpretation, this squeezes the entire mind into the syntactic rule! This is absurd. Each time a syntactic rule such as the one for cases (1)–(2) is formulated, one has to push mental processes into it by stipulating that this rule makes sense and operates only when all mental processes are part of it. This is awkward as long as one has to formulate valid syntactic rules for cases like (1)–(2). Otherwise, how do we account for linguistic intuitions that deliver linguistic judgements through inferences? If this takes us nowhere, let us explore the other possibility. Now if we say that the gap is interpreted, in fact we mean that it is a descriptive generalization which is sort of frozen from real instances. And that is what we mean when we say, for example, the colour red in traffic signals is generally interpreted to mean ‘stop’. If this is so, a syntactic rule is defined with reference to a frozen or abstracted descriptive generalization. Hence we can understand a syntactic rule to be a syntactic rule only by appealing to an abstracted descriptive generalization. But then how does one know an abstracted descriptive generalization (about the matching or correspondence of an expression with the interpreted gap) to be what it is, if not by having it defined on the strings that syntactic rules generate? What this means is that we can understand an abstracted descriptive generalization about the matching or correspondence of an expression with the interpreted gap only when we make reference to the syntactic rule(s) a string or a set of strings is subject to. But how is this so? 
This is so because any interpretative generalization can be potentially defined in an arbitrary manner, but in any formal system for (natural) language interpretative generalizations are defined only with respect to the syntactic rule(s) a string or a set of strings is subject to. That is, for cases like (1)–(2) we cannot make any descriptive generalization (about the matching or correspondence of an expression with the interpreted gap) except by an appeal to the syntactic rule that is itself defined on the basis of such descriptive interpretative generalizations. This is circular. If one is still not convinced, here is a way of seeing through this circularity. In example (2) above, there are two predicates (in the matrix and embedded clauses) and ‘why’ can be a part of either of the two predicates as an adjunct. What is it about meaning taken by itself which prevents this? In fact, there is nothing. There is nothing wrong in having ‘why’ interpreted to mean what it will as an adjunct as part of the predicate ‘laugh’. And this is what we see in (1). However, there is something about English syntax that does not allow ‘why’ to be interpreted through a long-distance dependency in (2). This is exactly the point at which we fall into the trap of circularity. We define syntactic rules on the basis of descriptive interpretative generalizations, and at the same time, we define descriptive interpretative generalizations on the basis of the very same syntactic rules. What if we split this sentence and make it a disjunction? That is, either we define syntactic rules on the basis of descriptive interpretative generalizations or we define descriptive interpretative generalizations on the basis of those syntactic rules. Is it going to be of any help? Perhaps not. If the requirement goes this way, there will be nothing left of syntactic rules, because either a sound descriptive interpretative generalization and nothing else will serve our purpose, on the one hand, or descriptive interpretative generalizations will turn out to be vacuous and thus devoid of much content, on the other. This does not give us any purchase on an understanding either of natural language syntax or of natural language semantics in any conceptually significant and/or empirically substantive sense.
One may also note that there is auxiliary insertion/inversion in (1)–(2), as in (3) below:
(3) Is the man who is a linguist__ home?
Cases like (3) have been marshalled to argue for the structure dependence of linguistic rules, in that the fronted auxiliary ‘is’ is interpreted with reference to the matrix predicate ‘home’ (at the syntactic level), but not to the embedded predicate ‘(is) a linguist’. Now suppose we need a rule that differentiates the cases in (1)–(2) from (3), for the sentences in (1)–(2) involve the displacement of a different kind of item (that is, Wh-phrases), which rides on the displacement of the auxiliary that underscores the commonality of all three sentences. After all, we can have an echo question such as ‘John laughs why?’, but not ‘She wonders why John laughs how?’ How can a computation be sensitive to the differences in the contents of the syntactic rules that differentiate the cases in (1)–(2) from (3)? Another question also arises: what is it about computation that can make out the differences in the contents of the syntactic rules that differentiate the cases in (1)–(2) from (3)? Can the differences be phrased in such a manner as to lead one to say that the syntactic rule for (1)–(2) is not (syntactic) structure-dependent (given the dilemmas in the formulation of the structural generalizations for (1)–(2)), while the one for cases like (3) is? Our linguistic sensibility revolts and we ask: how can this be possible? How can some syntactic rules be structure-dependent and some not? Above all, it is structure dependence that gives us a licence to claim that syntactic operations are systematic operations which count as computations. One may note that the criteria that help make out the differences in the content of the syntactic rules differentiating the cases in (1)–(2) from (3) are second-order generalizations or rules. At which order does computation operate – at the level of first-order syntactic rules or at the level of second-order rules that differentiate one set of first-order syntactic rules from another?
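The contrast that structure dependence is meant to capture can be made concrete with a toy sketch. This is a hypothetical illustration under invented names, not an implementation of any proposed grammar: a structure-independent rule that fronts the linearly first auxiliary garbles the declarative counterpart of (3), whereas a structure-dependent rule that fronts the matrix auxiliary produces (3).

```python
# Hypothetical contrast (not from the book): a linear-order rule vs a
# structure-dependent rule for auxiliary fronting, applied to
# "The man who is a linguist is home."

def front_first_aux_linear(words):
    """Structure-independent rule: front the linearly first 'is',
    ignoring clause structure altogether."""
    i = words.index("is")
    return [words[i]] + words[:i] + words[i + 1:]

def front_matrix_aux(matrix_subject, matrix_aux, matrix_pred):
    """Structure-dependent rule: front the auxiliary of the matrix clause,
    treating the subject (with its relative clause) as one constituent."""
    return [matrix_aux] + matrix_subject + matrix_pred

declarative = "the man who is a linguist is home".split()

# The linear rule fronts the embedded 'is' and yields word salad:
print(" ".join(front_first_aux_linear(declarative)))
# -> "is the man who a linguist is home"

# The structure-dependent rule fronts the matrix 'is', giving sentence (3):
print(" ".join(front_matrix_aux(
    ["the", "man", "who", "is", "a", "linguist"], "is", ["home"])))
# -> "is the man who is a linguist home"
```

The point of the contrast is that only the second rule refers to constituents rather than linear positions, and it is that reference to structure which licenses calling the operation a computation over representations.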
We have already run into fiendish problems in connection with first-order syntactic rules. So if computation operates at the level of first-order syntactic rules, one has to buckle under the problem of circularity with respect to the relationship between syntactic rules and interpretation/interpretative generalizations. In that case, computation stagnates in a vacuum, and it does not make any sense for us to say that computations operate on linguistic representations, given that we cannot define, in a non-circular manner, what computations operate on. Nor can we say anything determinate about whether syntactic rules are mentally instantiated or whether computations operate on linguistic representations in the mind or whether such computations are themselves representations or instantiations. So no solid resolution for any of these issues has been arrived at (see, for a discussion, Langendoen and Postal 1984). What if we go for the other option? If we say that computation operates at the level of second-order rules that differentiate a set of first-order syntactic rules from a...

Table of contents

  1. Cover
  2. Title
  3. Copyright
  4. Contents
  5. List of Figures
  6. Preface
  7. Acknowledgements
  8. 1 Introduction
  9. 2 Language and Linguistic Theory
  10. 3 How Language Relates to the Mind
  11. 4 How Language Relates to Computation
  12. 5 Putting it all together: the Relation between Language, Mind and Computation
  13. 6 The Emerging Connection
  14. 7 Linguistic Theory, Explanation and Linguistic Competence
  15. 8 Linguistic Theory, Learnability, Mind and Computation
  16. 9 Conclusion
  17. Notes
  18. References
  19. Index