Part I
Preliminaries
1
Introduction
Billy lay on a couch. His head throbbed. He had the chills. It hurt to stand. Outside was cold and gray. It started to rain. Billy looked towards his garden. A dog – Mrs. Ruffles, an old golden retriever – sat at the window, searching for his eyes. He made eye contact. Mrs. Ruffles began to whine.
He had forgotten about Mrs. Ruffles. He was keeping her for a friend. She was clearly miserable. Billy knew he should go let her in. A thought occurred to him. Mrs. Ruffles is just a dog. He felt terrible for thinking it. But he also felt ill enough to wonder whether he might just stay on the couch for a few moments more. The misery of Mrs. Ruffles was placed on the balance next to his.
He really ought to go let her in. I’m not trying to suggest otherwise. I just want to focus on his thought – Mrs. Ruffles is just a dog. It’s a common thought. It contains an interesting suggestion. The suggestion is that there is some reason, something associated with the kind of thing Mrs. Ruffles is, and the kind of thing Billy is, for thinking that the misery of Mrs. Ruffles counts for less than Billy’s. Maybe the suggestion is that her misery is somehow not as bad as Billy’s. But what could that mean? Is that a defensible thought?
While we are comparing miseries, compare these two. First, the misery I might experience if I were to visit a Lobster Shack and decline to order the lobster. Second, the misery a lobster might experience if I ordered it. It’s just a lobster. Right? The thing will get boiled alive. How bad is that for a lobster?
These cases occur all the time, to most of us. They are familiar. Some of the questions I want to ask in this book arise from such cases, and relate to a familiar kind of moral reflection regarding the nature of the good life, and the nature of right and wrong action. We lack consensus regarding answers to these questions. But we have spent a lot of time reflecting on them. That’s not nothing. However, some of the questions I want to ask arise from less familiar cases. And some of these less familiar cases highlight practical and ethical questions facing advanced modern societies – questions about which we have spent much less time reflecting. As a result, our moral discourse surrounding such cases is less advanced, and moral consensus, even if possible, is probably further away.
Here is one such kind of case. In the near future, our technical skill at manipulating the genetic code is much advanced. For example, we are able to turn off a pig’s genetic program for growing a kidney, to insert a human pluripotent stem cell into the embryo of a pig, and to bring a pig with a developing human kidney into the world. Moreover, we can do so in a way that generates an easily and safely transplantable kidney – provided we keep the pig in sterile conditions and ‘sacrifice’ it once the kidney has reached the right stage. Another thing we can do is this: we can use human stem cells to alter the developing nervous system of a range of animals. For example, mice are able to incorporate elements of the human nervous system – certain kinds of neurons and glial cells – and these mice demonstrate impressive gains on a range of cognitive tests. There are good scientific reasons for performing this procedure, of course. Doing so allows us to study the progress of developmental processes and of various kinds of infections, and to test certain kinds of psycho-active drugs, in animals that we do not mind killing. The results are highly valuable for understanding what goes wrong in the human nervous system and how we might develop fixes. Of course, the results might be even better if we altered the nervous systems of animals more similar to us – Great Apes, for example. Some scientists argue that, given the benefits, we ought to get over our moral misgivings and experiment on Great Apes. They’re just animals, after all. Others argue that not only should we ban research on apes, but we should also ban it on mice. Some in this camp also argue that we should ban the use of pigs as organ hosts. Still others take an intermediate position: it is wrong to experiment on apes, but not necessarily on mice. And, given the benefits, it is okay to use pigs as hosts for human organs. Of course, a large part of the disagreement in all these cases stems from disagreements about the kind or amount of value present in the mental lives of all these different animals.
Here is a second kind of case. Hedda is a fun-loving mother of three and a devoted wife. While skiing in Italy, Hedda crashes into a tree and sustains a traumatic brain injury. After several days in a coma, Hedda begins to show minimal signs of recovery. The doctors are initially pessimistic. The damage is severe. Nonetheless, Hedda shows signs of awareness. In particular, she sometimes makes unintelligible sounds when her family is in the room. And she sometimes reacts to music. According to one of her nurses, she enjoys Johnny Cash, especially the older stuff. Hedda is assessed and diagnosed as being in a Minimally Conscious State (MCS). This is a diagnosis that indicates a level of functional sophistication above that of the Vegetative State. Even so, the doctors believe there is no chance of full recovery, and little chance of recovery beyond MCS. Hedda will never be able to communicate her wishes regarding her own care, nor will she be able to truly understand her own condition. After an initial period of grief, Hedda’s family comes to believe that she would not want to continue living in this condition. They recall instances before the injury when Hedda seemed to indicate as much. Still, in the absence of clearly expressed prior wishes, the legal issues surrounding Hedda’s case are complex. Her family will likely need to press the case in court if they want artificial nutrition and hydration removed. Although Hedda’s husband was initially happy about the diagnosis of MCS, he comes to see this diagnosis as a burden. The reason is that if Hedda had been diagnosed as being in a Vegetative State, they could probably have had artificial nutrition and hydration removed without involving the legal system, and Hedda could have had the death her husband judges she would want. Unlike her husband, one of Hedda’s nurses is glad that Hedda was properly diagnosed. He knows that many patients who should be diagnosed as being in MCS are misdiagnosed as Persistently Vegetative. And he thinks this is a tragedy – for patients diagnosed as vegetative rarely get a chance to receive proper care. But Hedda’s nurse believes that, with proper care, Hedda can have a positive quality of life. She is conscious, after all, he thinks. That’s something we should respect.
I’ll mention one more kind of case here. It is the future. Your granddaughter turns out to be a brilliant engineer. One day she comes over for tea, and begins discussing a difficult case at her lab. Using highly advanced neuromorphic technology, she and her colleagues have developed a range of computer programs that approximate and sometimes far outpace the mental capacities of an adult human. Typically these programs are used in machines that do one thing very well – things like enabling a self-driving car to perceive its surroundings, or enabling an autonomous weapons system to discriminate between a combatant and a non-combatant. But lately they have been experimenting with ways to put some of these disparate capacities together in a kind of robot. Your granddaughter describes the shocked reaction of many in the lab when one of these robots was going through a series of tests. Apparently, after answering a range of questions designed to test its inferential capacities, the robot offered a question of its own. ‘After these tests,’ it said, ‘is it your intention to turn me off?’ After your granddaughter leaves, you pull a dusty book down off the shelf. It is an old philosophy of mind anthology, given to you (as you now recall) by your granddaughter after she took a philosophy course at university. The reason you are thumbing through the anthology is that you are now gripped by the thought that this robot in your granddaughter’s lab might actually be conscious. If that’s true, you think, then is this thing more than just a robot?
In spite of important differences in detail and in ancillary moral issues, at the heart of these cases are worries about the moral significance of consciousness. In particular, these cases highlight puzzlement about the kind of value that may be present in different kinds of conscious entities, and accordingly about the nature of our reasons to treat these entities in various ways. Let Mrs. Ruffles in? Eat the lobster? Give Hedda intensive medical care or allow her to die? Experiment on mice with partially human brains? Turn the robot off or begin to think of it as a person? My view on questions like these is that it is difficult to answer them without some understanding of the kind or kinds of value associated with the kind or kinds of conscious mentality involved.
Developing such an understanding is my aim in this book. I want to know about a certain kind of value that attaches to consciousness – why it attaches, how much of it might be there (and why), and what kinds of reasons for action might be related to the value within consciousness.
Fair warning: nothing like a moral algorithm, or even moral certainty regarding these cases, is forthcoming. These are difficult cases for a reason. The hope is that, by the end of the book, we will be able to see more clearly why these cases are so difficult, as well as what we are committing ourselves to when we choose one or another course of action.
The first thing to do is to get as clear as possible regarding the central concepts in play: consciousness, value, and moral status. That is the task of the next three chapters.
Note
Research for this book was supported by the Wellcome Trust, Award 104347.
2
Consciousness
Consciousness is polysemous. One way to get a sense of this is to read the entry for ‘consciousness’ in the Oxford English Dictionary. Six different definitions are discussed. It is tempting to spend the day mapping the relationships between them – do they all share some core of meaning, or not? – but I won’t do that here. The point is to note that a wide range of legitimate uses of the term ‘consciousness’ will not be directly at issue in this book. For example, sometimes we use ‘consciousness’ to refer to a state of awareness or knowledge of something, whether internal or external: on a long bike ride, I can be conscious of my bodily sensations of elation, the contours of the trail in front of me, a hawk overhead, etc. Sometimes we use ‘consciousness’ with connotations of the self or the person. In this connection, the OED offers an interesting quote from Conder (1877, 91): ‘From our innermost consciousness a voice is heard, clothed with native authority. I feel. I think. I will. I am.’ Sometimes we are more reductive, using ‘consciousness’ to refer simply to the state of being awake. For example, we sometimes describe waking from a deep sleep as regaining consciousness.
The kind of consciousness at issue in this book is not exactly any of those just discussed (although each of them seems to me to depend on this kind of consciousness in certain ways). The kind at issue here is what philosophers and psychologists call ‘phenomenal consciousness.’ In the philosophy and science of consciousness, we say that phenomenal consciousness is a feature or aspect of mental states, events, and processes. The feature or aspect is that there is something it is like for you to token or undergo these mental states, events, and processes.
That terminology is meant not so much to elucidate phenomenal consciousness as to point to it. Here is another way to point to it, drawing on some of the ways we use the word ‘consciousness.’ You wake from dreamless sleep, and it seems to you that you have regained consciousness. What did you regain? Speaking for myself, it seems I regain a kind of experiential field – a space populated by all sorts of mental states, events, and processes. In the normal case, this field will contain perceptual bits (visual states, olfactory states, auditory states), bits due to imagination (that song playing in my head), bits due to thought (worries about what I have to do today), bits due to attention (focus directed at a noise I hear, or at a pain in my back), and on and on. This space is a shifting, dynamic thing – it seems to change as the world around me and in me changes. So we sometimes speak of a stream of consciousness, and the way that the stream flows. This space also seems to connect me in a very intimate way with the world. So we sometimes speak of being conscious of various objects and events in the external world.
What it is like for me at some time is many things, then. It is all of that. And that is phenomenal consciousness.
Faced with the diversity present within the experiential field, one might hope the philosopher will have some way of carving up the field – drawing illuminating distinctions, constructing taxonomies of various types of conscious experience, offering some grip on the architecture of this unwieldy phenomenon. I’ll do my best to do some of this in what follows. For now, however, I wish to keep things as simple as possible. I am examining the thought that phenomenal consciousness is somehow valuable. So I will begin (in Chapter 5) by considering phenomenal consciousness as a whole. As we will see, however, we will eventually have to move on to more detailed consideration of particular aspects of consciousness.
3
Value
A guiding thought for this book is that phenomenal consciousness contains value. What does that mean? Let me begin indirectly, by focusing on the ways that we place value on things.
We place value on a wide range of things – on objects, on events, on states of affairs, on collections of objects or events or states of affairs. We do so in two closely related ways. First, we take up a range of valuing attitudes towards the things in question. Second, we behave towards these things in ways that reflect – and are usually explained by – these valuing attitudes. What valuing attitudes we take up will depend on how we evaluate the thing. Human evaluative practices are complex: just go to any on-line discussion forum regarding science fiction films or professional sports teams. Depending on the thing and on our evaluation of it, we might like it, love it, desire it, approve of it, respect it, stand in awe of it, feel guilt about it, be surprised by it, hate it, fear it, regret it, feel sadness over it, be interested in it, be annoyed or angry or disgusted by it, and more. How best to organize the space of valuing attitudes is an interesting and difficult question.
For example, some of these attitudes apply cleanly to items considered in abstraction from one’s own circumstances. I might like or approve of an action performed by an agent who lived three thousand years ago, even though the action has no influence on me or my circumstances. Other attitudes are more naturally seen as evaluations of an item in relation to one’s own circumstances. When I regret or fear something, it is usually because I stand in a particular, personally relevant relationship to it. Further, some evaluative attitudes obviously reflect positive or negative evaluations (e.g., love, hate), while the valence of other attitudes is not immediately clear. Does awe reflect a positive or negative evaluation, some mix of the two, or neither? It is hard to say (see McShane 2013). Note that there is no good reason to think the things on which we place value must be easily classified as good or bad. We often offer mixed evaluations of things – something can be an item of love, desire, fear, awe, interest, and disgust. Facts about the value of a thing will often resist answers in terms of simple scales and simple dichotomies.
So it turns out that placing value on things is a fairly normal, yet fairly complex, part of human life. We can regiment the complexity somewhat by invoking a distinction between derivative and non-derivative value. It is non-derivative value that is the fundamental notion. As a gloss on it, people often say that a thing has non-derivative value if it has (or bears) value in itself, on its own, or in its own right. By contrast, a thing has derivative value in virtue of some connection it bears to things with non-derivative value. Often the connection is elucidated in terms of a thing’s usefulness. The taste of fried okra is extremely good, and it is plausible to think that whatever your list of items that bear non-derivative value, extremely good taste experiences will be on it. Such experiences have value in their own right. Now you can’t make good fried okra without a decent fryer. So a fryer is a thing with derivative value in the sense that it is useful for getting you to a thing with non-derivative value: the experience of eating fried okra. Short of a pretty good argument to convince us otherwise, it looks like a mistake to place non-derivative value on a good fryer (where good means good at frying, not good in its own right), or to place merely derivative value on the experience of eating fried okra.
I have been talking about the ways that we place value on things. This book is not, however, about how we place value. This book is about the value that things – in particular, conscious experiences – have. To get a feel for that distinction, think about what you think when one of your friends fails to see the value you see in something (a great movie, a great meal, a lovely person, etc.). You think that they are, for some reason, missing what is there. There’s value there, you think, it’s obvious, and your friend misses it.
In this book, then, I’m going to be interested in the non-derivative value present within consciousness. In reflecting on this, I will be trying not only to account for the value that is present in consciousness, but also to understand what makes the items that bear value bear the value that they do. I want to know not only which things within consciousness have non-derivative value; I want to know why they have that value.
We are still in the preliminary phases. But I need to say a little more about non-derivative value. As I understand it, non-derivative value is a general or determinable category containing sub-types. One potential sub-type is intrinsic value. This is value an entity bears in virtue of its intrinsic properties (if you believe in intrinsic properties).1 Another potential sub-type is essential value. This is value an entity bears in virtue of its essential properties – the properties that make it what it is (see Rønn...