Chapter 1
Microethics
When this common sense of interest is mutually expressed, and is known to both, it produces a suitable resolution and behaviour. And this may properly enough be called a convention or agreement betwixt us, though without the interposition of a promise; since the actions of each of us have a reference to those of the other, and are performed on the supposition that something is to be performed on the other part.
Hume, Treatise, III, ii, sect. 2
One does not predict where the other will go, since the other will go where he predicts the first will go, which is wherever the first predicts the second to predict the first to go, and so on ad infinitum. Not "what would I do if I were she?" but "what would I do if I were she wondering what she would do if she were I wondering what I would do if I were she".
Thomas Schelling 1960, p. 54
Microethics
This chapter shows how cooperation and wariness can do some of the work of psychology. First some examples.
You are driving along a narrow one-lane road, with a ditch on one side and a stone wall on the other. Another car is approaching from the other direction. Both cars slow down slightly, realizing that there is not room for both. There is an easily visible widening in the road, nearer you than the other car. You speed up to get to it, the other car speeds up to pass by you while you are in it, and both pass smoothly by each other.
You are walking towards a closed door, with your arms full of groceries.
Another person is also approaching the door, slightly ahead of you. He accelerates his pace slightly. This generates an expectation in you. He has either seen the problem you face and intends to solve it by opening the door for you, or he sees that you might expect him to open the door and is rushing to get through before the issue arises.
You are playing tennis, doubles. You are near the net while your partner meets the ball. You can see one of your opponents preparing to reply, in a way that you haven't a hope of intercepting from where you are. You move out of the way, on the assumption that your partner is moving into position to meet the ball and return it through where you had been standing.
In all of these examples you form an expectation about what another person will or might do, which is linked to possibilities of cooperation or controperation (to coin an equally vague opposite). The patterns of reasoning that might support the expectation can run in both directions between considerations about cooperation and considerations about the other person's motivation. In the first case there is an obvious solution to the impending impasse for the two cars, and each acts as if the other has decided to implement it. So there is a potential inference from cooperation to psychology. In the second the assumption that you want the door opened could suggest to the other person either that he should get in a position to help or that he should get in a position to evade the request. So there are potential inferences from psychology to cooperation. In the third you can be understood as reasoning from the solution to the imminent problem to the acts it requires, for yourself and your partner. But you can also be understood as reasoning from your partner's apprehension of the problem to her decision to act appropriately (to your own decision to act so as to complement what you expect her to do).
There is no conflict here. We do not have to decide whether psychology is generally inferred from cooperation and controperation or whether the inferences generally run the other way. In most real cases there are many simultaneous thoughts, and many interlocked transitions between them, so that the overall pattern can be very mixed. Typically one derives solutions to problems of cooperation from attributions of states of mind and the other way round, in a complex interdependent pattern. But this fact is itself very significant. It opens up new ways of thinking about our everyday understanding of one another, of what is often called folk psychology. ("Folk psychology" is a very loaded term. See warnings and prejudices in the endnote.)1 It suggests that sometimes and in some ways we understand because we can cooperate rather than the other way round.
Let me coin the term microethics for the collection of ways of thinking we have in everyday life for finding our way through frequently occurring situations in which the stakes are low but there are potential conflicts of interest between individuals. (I'll also include the thinking that smooths out conflicts and transition stages of a single individual's history.) Children learn a lot of microethics in the first few years of life, at the same time as they are picking up mental concepts. At this point I need make no claims of continuity or discontinuity between microethics and full-scale explicitly conceptualized moral thinking. There may well be conflicts between the hardly conscious routines a person uses to minimize conflicts with others in everyday life and some of her most deeply held principles about duty, obligation and the good. And I need not assume that there is a fundamental unity to the different elements of microethics. Some components may have very different origins or nature from others. It is pretty clear that some components are shaped almost completely by inherited adaptations to social life, and some other components are the effects of particular patterns of social life and of particular beliefs and values. (See the introduction and essays in Barkow et al. 1992, and the essays in Whiten and Byrne 1997, especially those by Schmitt and Grammer, and Gigerenzer.) And the interaction between components of these two kinds must be extremely intricate.
Microethics and the attribution of states of mind are intimately connected in everyday life. It is rarely obvious on any given occasion which microethical lemmas are derived from which psychological ones, and vice versa. Often we can see how an attribution of a state of mind could be based on a solution to a microethical problem. For example, in the first case above, the other driver could be taken as reasoning: a collision would be bad, a deadlock would be bad, the only easy way of avoiding both is for the car approaching me to speed into the passing place, so I expect the driver to intend to achieve that. In this case the psychological state attributed is an intention, but it could as easily have been a desire, a process of reasoning, or a belief.
The derivation of rudimentary psychological attributions from microethical premises is the theme of this chapter. I aim to make visible a layer of thinking which can serve many of the functions of attributing beliefs, desires and other states of mind via the solution of problems arising when agents interact. The existence of such a layer should not be surprising, when one considers that small children, very early in their acquisition of adult concepts, engage in activities requiring the kind of delicate coordination found in the examples above. And other intelligent social animals face and solve similar problems. (In fact, the patterns of thinking described in this chapter are, I would say, the common property of two-year-old children and adult dogs.) One thing that I must make clear, therefore, is the ways in which microethical thinking mimics or imitates or substitutes for the attribution of states of mind. That is the task of the next section of the chapter. In the rest of the chapter I shall argue for two conclusions. First, that microethics is often a good basis for psychology, in particular that inferences from solutions to problems of cooperation to attributions of states of mind are often sensible ways of reasoning given the limited data, time and thinking power available. (Similarly, the states of mind that people are interested in finding in one another are typically states whose presence makes a difference to the possibility of worthwhile interaction.) In arguing this I shall have to make the ideas of microethics and of a solution to a cooperation problem clearer. And, second, I shall argue that to some extent we have to base our attributions of states of mind in part on microethics. In particular, we have to classify the actions that we explain by attributing states of mind in microethical terms. These two conclusions are different.
The first conclusion is just about possibility, since these are not the only sensible ways of reasoning about the topics concerned. The second conclusion is about necessity: although the dependence on microethics that it claims is weaker than that described in the first conclusion, it is one to which there are far fewer and less likely alternatives.
Strategic problems
To see the special intimate link between microethics and the attribution of states of mind we must consider what is special about many-person interactions. We must consider strategicality.
Strategic situations are those in which the outcome for each of a number of interacting agents depends on the actions of the others. If all the people in such a situation are deliberately choosing their actions, each person's decision will have to accommodate its connections with those of others. Notice that I have not characterized strategicality in terms of agents' preferences or the reasoning they may follow. I am after something more fundamental. In a strategic situation the actions of the interacting agents are responsive to some factors in the environment, which are themselves affected by the agents' actions. Each agent is such that in the absence of other agents they would act so as to bring about some end result or maximize some quantity. But in the presence of the other agents the choice of the end to bring about or the degree to which the quantity is maximized is a more complicated matter. For it is the joint effect of all the agents' actions, itself caused by the presence in each agent of processes parallel to those which select actions for each other, that is efficacious. (The wording is meant to prevent the impression that the outcome-yet-to-be teleologically magics the actions into being. See Chapter 4, "Strategic attractors".)
The simplest examples are at one extreme in species evolution and at another in deliberate economic action. In the first, whether a trait developed by one species maximizes reproductive fitness depends on the traits developed by others, and vice versa. And in the second, whether an act brings in enough money to maximize the utility of a particular agent depends on the choices made by others, and vice versa. But most of the strategic situations that interest me fall between these two extremes, being neither the result of random mutations nor explicitly calculated self-conscious choice.
Some terminology. I will refer to the processes that determine a person's acts as choices, hoping not to give the impression that a choice has to be calculated or self-conscious. If you are walking to the bus stop as the bus comes around the corner, and then just find yourself running, this will be for me a choice. And I shall refer to strategic situations rather than to games when, as usual, I want to avoid the impression that the interacting people are to be thought of as idealized, explicitly deliberating, or paradigmatically rational. Though I will not be fully consistent in the usage, I shall usually use "person" in the sense of "flesh and blood human being" and "agent" in the sense of "idealized actor as construed by some theory of action".
Two constraints on strategic choice are particularly important. The first is that the basic material shaping a person's decision will not include definite probabilities of each other person's making one choice rather than another. For the other people's choices are themselves dependent on the as yet unmade choice of the person in question. (If you could know in advance what the others are going to do then you could know what you yourself were going to do, and you wouldn't have to bother deciding.) As a decision progresses each person can come to conclusions about what others are likely to do, just as she comes to conclusions about what she may do. But these are products of, rather than inputs to, the decision. (This is a feature exploited to great effect by Brian Skyrms' exploration of decision-dynamics. See Skyrms 1992.)
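The point that likelihood estimates are products of, rather than inputs to, deliberation can be given a toy illustration. The following Python sketch is far cruder than Skyrms' actual models: it is a fictitious-play-style loop, with hypothetical function names and schematic payoffs, in which each player's estimate of the other's choice is built up from the history of the joint process itself rather than supplied in advance.

```python
def best_response(payoffs, p_other_plays_A):
    """Index of the better act (0 for A, 1 for B), given an estimate
    of the probability that the other player chooses A.
    payoffs[my_act][other_act] is this player's payoff."""
    exp_A = p_other_plays_A * payoffs[0][0] + (1 - p_other_plays_A) * payoffs[0][1]
    exp_B = p_other_plays_A * payoffs[1][0] + (1 - p_other_plays_A) * payoffs[1][1]
    return 0 if exp_A >= exp_B else 1

def deliberate(payoffs_1, payoffs_2, steps=100):
    """Each player tracks how often the other's tentative choice has
    been A so far, and responds to that evolving estimate.  The
    returned probabilities are outputs of the deliberation, not
    assumptions fed into it."""
    counts = [1, 1]   # smoothed count of tentative choices of A by each player
    total = 2
    for _ in range(steps):
        est_1 = counts[1] / total   # player 1's current estimate of player 2
        est_2 = counts[0] / total   # player 2's current estimate of player 1
        act_1 = best_response(payoffs_1, est_1)
        act_2 = best_response(payoffs_2, est_2)
        counts[0] += (act_1 == 0)
        counts[1] += (act_2 == 0)
        total += 1
    return counts[0] / total, counts[1] / total
```

In a pure coordination game (both players paid 1 for matching, 0 otherwise) the loop settles quickly on both playing A, and each player's estimate of the other emerges accordingly.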
The other important constraint is that a person cannot simply choose the action with the best outcome. For an action may have a wonderful consequence if others make suitable choices, under circumstances which on reflection show that the others would be very foolish to make those choices. And if they donât, its consequences may be disastrous.
Procedures that choose a person's actions, whether by explicit calculation or not, will have to produce rather than rely on estimates of the likelihood of acts performed by other agents. And simply going for the best is potentially disastrous. The result is that the idea of a best, or most rational, choice becomes rather problematic. We cannot say that a choice is better than its alternatives simply when its expectation is greater, that is, when on average it will produce better results. For this average can't be given a sensible value without weighing in the likelihood that the other people concerned will do one or another of the acts open to them. The most that can be said is that it is appropriate for one person to choose her actions by a procedure that is congruent with those of the people she is interacting with, in that the resulting combination of actions will give her better results, on average, than she would have got by choosing actions according to any other method. And if the procedures involved also produce estimates of the likelihood of particular people choosing particular actions, it is appropriate for such a stable combination of action-choosing procedures to be such that the actions chosen do give the best results according to these likelihoods.
This fact will appear in some form on any approach that recognizes the strategic character of many-person interactions. In game theory, the formal theory of strategic choice, it results in a variety of solution concepts. A solution concept is a criterion for satisfactory choice, such that if a person's choice conforms to the criterion, and so do those of other interacting people, then the result will be reasonably good for that agent. There are many solution concepts. Two well-known ones are von Neumann and Morgenstern's classic minimax criterion (choose the action which is least bad whatever the other chooses) and the now orthodox Nash equilibrium (choose an action which is part of a combination in which no one can do better by unilaterally choosing differently). There are much more subtle candidates (an accessible treatment is found in Hargreaves Heap et al. 1992. For a deep and rigorous treatment see Myerson 1991). No solution concept is intuitively right in all cases. For the sake of exposition I shall work (in this section only, mind you) with the standard idea that the Nash equilibrium is at any rate a necessary condition for a solution to a strategic situation.
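For readers who like to see such criteria made concrete, the two solution concepts can be sketched for finite two-player games given as payoff matrices. This is a schematic rendering, not game theory proper: in particular, this minimax considers only pure acts, whereas von Neumann and Morgenstern's theorem concerns mixed strategies as well.

```python
def minimax_choice(payoffs):
    """Security criterion over pure acts: pick the act whose
    worst-case payoff is least bad.  payoffs[act][other_act]
    gives this player's payoff."""
    return max(range(len(payoffs)), key=lambda act: min(payoffs[act]))

def pure_nash_equilibria(row_payoffs, col_payoffs):
    """All act pairs (r, c) from which neither player can do better
    by unilaterally choosing differently."""
    rows, cols = len(row_payoffs), len(row_payoffs[0])
    eqs = []
    for r in range(rows):
        for c in range(cols):
            row_ok = all(row_payoffs[r][c] >= row_payoffs[r2][c] for r2 in range(rows))
            col_ok = all(col_payoffs[r][c] >= col_payoffs[r][c2] for c2 in range(cols))
            if row_ok and col_ok:
                eqs.append((r, c))
    return eqs
```

A coordination game with two good matches yields two pure equilibria, while matching pennies yields none, which is one illustration of why no single solution concept is intuitively right in all cases.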
Now we can finally say more clearly why making a strategic decision will always be an implicitly psychology-involving process. Consider an example, a pretty simple one, but just complicated enough to bring out the essential factors. We have two agents, call them "Cooperative doormat" and "Selfish pig". Each has two actions available to them; call them A and B. There are thus four possible outcomes resulting from the combinations of each agent's two actions. These can be represented in the standard matrix below, in which a pair such as (4,0) means that the outcome is satisfactory for the first agent (Doormat) at level 4 of some scale and satisfactory for the second (Pig) at level 0 of some scale. It is very important to understand that the scale need not measure how much an agent wants the outcome in any intuitively familiar sense. All that is assumed is that the scale measures the weight the outcome has with respect to some behavior choosing process that manages the kind of entanglement typical of strategic choice. (And, in case it is necessary to say this too, only the ordinal quality of the numbers matters, not their cardinal values, and as a result there is no implicit comparison between the degrees to which the outcome is satisfactory for the two agents.)
|           | Pig A | Pig B |
|-----------|-------|-------|
| Doormat A | (1,1) | (0,0) |
| Doormat B | (0,2) | (4,0) |
There is only one equilibrium of the situation: Doormat chooses A, and Pig chooses A also. If this is what they choose then neither will do better by reconsidering alone. Consider how the two characters could think their way to this equilibrium, if it is by thinking that they tackled the situation. From Pig's point of view this is a quite simple bit of conditional reasoning. If Doormat chooses A, Pig is better off choosing A rather than B, and if Doormat chooses B, Pig is also better off choosing A rather than B, so in either case Pig should choose A. But now consider the situation from Doormat's point of view. Doormat is best off choosing A if Pig chooses A, and best off choosing B if Pig chooses B. So to know that A is the best choice Doormat has to rehearse Pig's reasoning...
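The reasoning just rehearsed can be checked mechanically. Here is a small Python sketch, taking the payoffs to be (1,1) and (0,0) in Doormat's row A and (0,2) and (4,0) in row B, with Doormat choosing rows and Pig columns: A dominates for Pig, and (A, A) is the only combination from which neither character gains by deviating alone.

```python
A, B = 0, 1
# payoffs[(doormat_act, pig_act)] = (Doormat's level, Pig's level)
payoffs = {
    (A, A): (1, 1), (A, B): (0, 0),
    (B, A): (0, 2), (B, B): (4, 0),
}

def pig_dominant():
    """Pig's simple conditional reasoning: A beats B for Pig against
    either choice Doormat might make."""
    return all(payoffs[(d, A)][1] > payoffs[(d, B)][1] for d in (A, B))

def nash_equilibria():
    """Act pairs from which neither character does better by
    unilaterally switching to the other act."""
    return [(d, p) for d in (A, B) for p in (A, B)
            if payoffs[(d, p)][0] >= payoffs[(1 - d, p)][0]
            and payoffs[(d, p)][1] >= payoffs[(d, 1 - p)][1]]
```

Note what the check does not capture: Pig can find A by dominance alone, but Doormat's route to A runs through rehearsing Pig's reasoning, which is the psychology-involving step the text is drawing attention to.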