Keeping the World in Mind

Mental Representations and the Sciences of the Mind
About this book

Drawing on a wide range of resources, including the history of philosophy, her role as director of a cognitive neuroscience group, and her Wittgensteinian training at Oxford, Jacobson provides fresh views on representation, concepts, perception, action, emotion and belief.

By Kenneth A. Loparo and Anne Jaap Jacobson
1 Regarding Representations
1 Introduction
What makes these different theories – from Aristotle, Aquinas, Hume and current cognitive neuroscience – all theories about the mind or brain sampling the world? Hobbes certainly did not see himself as updating Aquinas and Aristotle in such a direct fashion, but he might have. This is because we can give a sense to saying that the patterns in the two domains – world and brain – are the same. The sameness here is mathematico-empirical inter-derivability (Dayan & Abbott, 2001). That is, there is a description of the environmental cause from which, given the appropriate empirical algorithms, a description of the effect can be derived, and vice versa. At the core of sampling theories is a notion of instantiating the same things, forms, qualities or patterns of activity.
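The inter-derivability appealed to here can be pictured with a toy linear encoding model. Everything in this sketch is a hypothetical illustration, not the author's or Dayan and Abbott's formalism: a description of the environmental cause (a stimulus vector) yields a description of the neural effect through an assumed encoding map, and a fitted decoding map recovers the cause-description from the effect-description.

```python
import numpy as np

rng = np.random.default_rng(0)

# Descriptions of environmental causes: 200 stimuli, 3 features each.
stimuli = rng.normal(size=(200, 3))
# Assumed linear encoding: each stimulus produces a 5-unit neural response.
encoder = rng.normal(size=(3, 5))
responses = stimuli @ encoder  # descriptions of the neural effects

# One direction of the derivation is the encoding model itself; for the
# reverse direction, fit a decoder by least squares.
decoder, *_ = np.linalg.lstsq(responses, stimuli, rcond=None)
recovered = responses @ decoder

# In this noiseless toy case the cause-description is recovered exactly,
# which is the sense in which the two descriptions are inter-derivable.
assert np.allclose(recovered, stimuli, atol=1e-8)
```

The point of the sketch is only that "same pattern" can be cashed out as the existence of such paired mappings, rather than as shared matter or form.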
The transition from matter and form to such inter-derivability is very significant because it points to the various theoretical commitments theoreticians have taken on in order to solve the problem that, in a perfectly ordinary sense, the mind does not get samples of the world in it. As we previously noted, we need a compensating ontology. That which is found in cognitive neuroscience is exactly what one should expect from the kind of mechanical science that we have. Forms have become, we might say, empirically equivalent patterns.
As we see at many different points in these initial chapters, we have two kinds of access to the two very different kinds of representation. On the one hand, we have the familiar and quite ordinary distinction between displaying and describing. I can inform you of my tastes in wines by showing you examples or by describing wines I like. On the other hand, more apparatus needs to be introduced if these ideas are to feature in philosophy of mind. Recently many philosophers have found it quite acceptable to think that the mind or brain somehow has some sort of semantic properties, perhaps even a code or language with meaning. In contrast, just about no one would think that the mind literally gets samples of many of the things in its environment. There are exceptions, to be sure, when the environmental entity is itself on the mental side of things, such as with another’s emotions. But in general what is needed is a fairly full ontology that can find a respect in which mind states and environmental states can be said to be the same, whether that is possessing the same form, instantiating the same qualities, as Hume has it, or possessing empirically equivalent patterns.
2 Representations in cognitive neuroscience
There are a number of ways in which one might get to the conclusion that there are important current scientific theories in which the brain counts as cognitively relating to its environment by realizing some of the environment in it, or getting a sample of it. One way is through doing a literature search. In 1999, I decided to use a search engine to go through academic articles that employed ‘mental representation,’ or just ‘representation’ in a context that contained ‘mind’ or ‘brain.’ The results of the search were very exciting. Whereas philosophers standardly agree that representations have content and aboutness, a number of other disciplines appeared to attach a very different understanding to their use of ‘representation.’ Their use was instead at least very close to the Aristotelian-Thomistic idea that what is sensed and known in the world outside our heads gets realized inside our heads. And the fields with this usage include cognitive psychology and the newly emerging cognitive neuroscience. As I saw when I went to a talk around that time, the experimenter who referred to a pattern of excitation in a monkey’s brain as ‘the movement of the banana in the monkey’s brain’ might possibly mean exactly what he was saying.
The results of that search, and more recent additions to it, have been discussed in detail in two previous articles (Jacobson, 2003, 2008) and employed in others (Jacobson, 2005, 2007, 2009). We look at some of the examples uncovered at various points in the following discussions. This is true particularly for Chapters 2 and 3.
One task is to display an alignment between a notion of sampling as instantiating, ordinary uses of ‘represent’ or ‘representation,’ and neuroscientific uses of those words. Let us notice in advance that ‘represent’ in the sample sense may cover two kinds of cases: instantiating and co-instantiating. Thus neural activity, as we see, may be said to represent pain because it (partially) instantiates it. Similarly, a piece that is among one’s best work might be said to represent one’s best work. Co-instantiating, which we might think of as sampling more properly, can occur when the neural activity in one person’s brain matches that of another’s. Thus, one person can represent another person’s emotions by co-instantiating the neural activity. In more ordinary cases, one might represent how someone walks by instantiating at least some of the gait’s properties.
The central focus of this work is the role of representations in cognitive neuroscience. We may feel that some factors discussed fall outside the purview of neuroscience, as some philosophers think. If that is so, we can understand them to fall outside our discussion, which is about understanding and using neuroscience without committing ourselves to truth-evaluable contents in the brain or realized in the brain.
In the next section, I argue that we have found the basis for a better account of the mind’s cognitive relation to its environment than that of description theories. There is a large and unfortunate lacuna in the description theories of cognition, and theorists of cognition working with the idea of displaying, sampling or co-instantiation have all attempted to fill the gap the language-influenced theories have left.
3 What is wrong with philosophy’s current representations?
In March 2011, at the memorial conference for Philippa Foot held at Somerville College, a well-known philosopher disputed Foot’s position on the doctrine of double effect. In doing so, he employed the idea of intentions as standard philosophical mental representations, with the last one in a chain guiding the action. He agreed with the criticism that Foot would never have employed such an idea, but remarked that ‘mental representation’ was proving very useful in philosophy. Such a response is unlikely to move anyone with any Wittgensteinian training. Technical terms are not forbidden, but they are not to be used to cover holes in a theory. One has to be able to give an account of how they solve the problems regarding which they were invoked.
In these terms, ‘mental representation’ understood in terms of semantic properties may well be problematic. The idea of an inner representation that guides our action may answer to our sense of agency when we act, but we do not have a theory that explains satisfactorily how the guiding occurs. How does inner content cause things like muscle contractions or indeed anything involved in acting? There is a hole in the theory.
Few theorists have paid much attention to the hole. One notable exception is Ramsey (2003):
In recent years, work on the nature of mental representation has become a major industry in the philosophy of mind. Almost all of this work has focused on a family of issues clustered around the core theme of explaining, in naturalistic terms, how mental states come to have intentional content. ... So while philosophers have hotly debated whether a frog’s brain represents flys [sic] or small black dots, or how we should understand the content of a Twin-Earthling’s beliefs, more fundamental questions regarding what, in naturalistic terms, a mental representation does, or exactly what it is about a structure that makes it a representation, have received very little attention in philosophy of mind. (126)
There are two important and related reasons for thinking that we are not going to get a satisfactory account of how content causes action; the hole may not get filled. I discuss the first problem here; the second, discussed in Chapters 6 and 7, has to do with the fact that, contrary to leading theories of content, there is no good reason to think that acquiring a concept requires an encounter with an instance of the concept. The world is full of fakes, and they can serve teaching purposes quite well. A very scary artificial owl might serve a mouse quite well as it learns that there are dangerous birds. Alternatively, baby birds can learn about the benefits a mother brings, and so acquire whatever sort of mother representation they need, by receiving food from a puppet glove.
To see the first issue, we need to distinguish between two kinds of content, and we do it by appealing to some theorists prominent in the field. Like a number of recent philosophers, Prinz holds that content has two components (Prinz, 2002). One concerns ‘intentional content’ (reference, denotation or extension), whereas the other is conceptual content (descriptive content). Thus, the intentional content of ‘elephant’ is the class of elephants, whereas the list (large mammal, trunk, grey, trumpeting sound) may capture the conceptual content.
For Prinz an account of content must give us an understanding of both intentional and conceptual content. For Fodor (2003, 2008), on the other hand, content is intentional content. Machery (2009) thinks that an account of intentional content responds to the philosophers’ desire to understand what makes something have the truth or satisfaction value that it does, whereas conceptual content falls under the purview of psychologists because it is involved in explaining how we use concepts.
In the following discussion, we concentrate on intentional content and accounts of that content. A number of theorists have argued that an account of intentional content cannot be enough because it does not address the uses of concepts. These objections may come from philosophers advocating theories of concepts incorporating use, including those stressing conceptual role; see Edwards (2009) for a useful summary of these views. Other objections come from theorists more interested in the psychology of concept employment, such as Machery (2009) and Murphy (2002). We encounter these views again as we discuss concepts in Chapter 6. These rejections of theories positing intentional content alone are not employed here. Rather, our objection is that intentional content does not have a role in current neuroscience.
Before we begin the argument, let us note that the conclusion does not mean we cannot get a neuroscientific account of the many times that one’s beliefs do appear to cause one to react. Conceptual content may still have a role to play. Further, as we see in Chapters 6 and 7, Barsalou’s account of concepts makes them Aristotelian representations, and so very congenial for a theory on which intentional content is causally inert.
Our objection to all accounts of intentional content or reference is that many of the factors that make it the case that something has content have nothing to do with neuroscientific explanations as they are developing. Increasingly, the computations such science posits take neural values for their variables. These representations are neural representations that are characterized in neural terms (Sharpee, Atencio & Schreiner, 2011); mind states are formalized in terms of the vectors of neurons (Fekete & Edelman, 2011). What this means is that we are advancing toward a clear picture of neural processes as making available adequate explanations for many of the items we also discuss in ordinary terms.
Accordingly, an adequate neuroscientific description of the neural causes does not have room for the factors that show up in philosophical accounts of intentional content, such as evolutionary history, or the simpler picture of the history of concept learning. Consider, for example, the Prinz (2002) amendment of Dretske’s account:
... the real content of a concept is the class of things to which the object(s) that caused the original creation of that concept belong. Like Dretske’s account, this one appeals to learning, but what matters here is the actual causal history of a concept. Content is identified with those things that actually caused incipient tokenings of a concept (what I will call the ‘incipient causes’), not what would have caused them (249).
Neither the things that originally caused the tokening nor the causal history of the concept are relevant to the neural-causal explanation. Similarly, if we want to trace neurally the path from the frog’s sighting of the speck to its zapping the speck with its tongue, conjectures or even known facts about evolutionary history are beside the point. Adding these factors to such a picture adds a causally redundant or superfluous element.
Let us be clear about what is in question here. It is not whether the past can provide us with causes of the present; the chemicals ingested by a potential father may well explain facts about his adult children. Indeed, all sorts of past factors can feature in causal explanations. If we ask why someone died of a brain aneurysm, one factor to mention might be that she did not tell her doctor of the early warning signs. The point remains that the victim’s not alerting her doctor is not part of the internal mechanism that results in death when one has an aneurysm, whereas the substance ingested by a father might well explain some abnormalities. What the parents ate may be causally relevant to a present neural understanding, even thirty years later. In the aneurysm case, by contrast, if we are asking what went on in her head that led to her death, ‘She failed to tell her doctor’ is not part of the answer. Brain science may find it needs to consider what unusual chemicals were available to a system; it does not include the content of phone calls of the sort we are discussing.
Let me stress this with a familiar example. Suppose we assign the content ‘fly’ to a frog’s perception because of facts about how frogs evolved to survive in the past. Those facts about other frogs in some distant past may explain why we use the term ‘fly.’ It may even be said to be a part of what explains why the frog has the instincts it does. It does not describe current causes in the frog’s neural system that lead to the tongue zapping. Among the parameters we need for that story are not the facts about past frogs’ survival. When we bring in content, we bring in facts that are not among the parameters of neuroscience.
There is a more general point underlying this argument. Positing mental representations of the kind standard in philosophy does not come for free. We need two things from such positing: an account of how we have realized in the mind something with truth or satisfaction values, and an account of how that works in the causal machinery, given that it is posited as a cause. The problem here is that the answer to the first goes nowhere toward answering the second. We investigate this in more detail shortly, when we look at the attribution of imperative content to inner neural processes to explain how they cause actions, in our discussion of Shea’s work in the following paragraphs.
To say this is not to endorse an eliminativist picture, because our ordinary vocabulary carries very important concepts of value, among other things. That suggests we should find a place for more than one sort of discourse about the human mind. We address this point briefly in the following discussion.
The argument about content I have just given shares a conclusion similar to that of the recent ‘exclusion argument’ of Shoemaker (2007) and Kim (2005, 2007). Applied to the case of Fodorian representations, that is, any representations with intentional content, the argument points out that they are realized in neural states, and that it is these physical realizers that do the causal work. Nonetheless, my argument does not endorse the premises of Kim’s argument that lead to a problem of causal exclusion. Rather, the point is about causal irrelevancy.
The core of my argument is closer to a recent point made by Keaton (2012), who shows that a functionalist account of what constitutes pain is not an account of the background causal conditions that enable a pain to cause, for example, wincing. A functionalist account may give an adequate account of the truth-conditions of ‘This is pain,’ but in doing so it brings in facts not relevant to a particular causal interaction. In our terms, we need to distinguish between the conditions under which content is said to be possessed by, or realized by, events and the why or how of some neural events having their effects. What constitutes content is not necessarily causally relevant to the neural theory.
What this means is that in a maturing cognitive neuroscience, intentional content does not have a causal-explanatory role. We may get a fuller understanding of the issue being raised by looking at a case where we have a fairly good idea of the neural mechanisms involved. Doing so takes us to reinforcement learning, which has been investigated extensively. We first approach reinforcement learning in largely non-neural terms, which is how it is often addressed.
Reinforcement Learning: The very basic idea is that in reinforcement learning one has neural reactions that signal a reward obtainable through some action; we call this the initial expectation signal, or IES. One then gets the reward from the action, which gives the consequent reward signal, or CRS. There are three possible comparisons in quantity between the IES and the CRS: the reward was better than expected, worse than expected, or a match. If there is a match, there is no learning, but in the other two cases one does learn, and the next IES should be modified. Particularly important in this model is the RPE, or Reward Prediction Error, which is a comparison of the IES and the CRS. It might be ‘not as good,’ ‘better than expected’ ...
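The comparison of IES and CRS can be sketched as a simple delta-rule update. IES, CRS and RPE are the passage's own terms; the function name, the learning rate, and the specific update rule (a Rescorla-Wagner/temporal-difference-style rule) are assumptions for illustration, since the passage only specifies the three-way comparison.

```python
def update_expectation(ies, crs, learning_rate=0.1):
    """Compare the expectation signal (IES) with the consequent reward
    signal (CRS); return the reward prediction error (RPE) and the
    revised expectation for the next trial."""
    rpe = crs - ies                      # >0 'better', <0 'worse', 0 match
    new_ies = ies + learning_rate * rpe  # RPE of zero means no learning
    return rpe, new_ies

# Reward exactly as expected: RPE is zero and the expectation is unchanged.
rpe, ies = update_expectation(ies=1.0, crs=1.0)
assert rpe == 0.0 and ies == 1.0

# Reward better than expected: positive RPE raises the next expectation.
rpe, ies = update_expectation(ies=1.0, crs=2.0)
assert rpe == 1.0 and abs(ies - 1.1) < 1e-12
```

The sketch makes the text's point concrete: the quantities the model computes over are signal magnitudes and their differences, with no intentional content among the parameters.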

Table of contents

  1. Cover
  2. Title Page
  3. Introduction
  4. 1   Regarding Representations
  5. 2   From Fodorian to Aristotelian Representations
  6. 3   Aristotelian Representations II
  7. 4   Hume
  8. 5   Ideas, Language and Skepticism
  9. 6   Concepts
  10. 7   Thought
  11. 8   Vision
  12. 9   Actions, Emotions and Beliefs, Part I
  13. 10   Actions, Emotions and Beliefs, Part II
  14. Conclusion
  15. References
  16. Index