A friend of yours is the Chairman of the Acme Oil Company. He occasionally calls with a problem and asks your advice. This time the problem is about bidding in an auction. It seems that another oil company has gone into bankruptcy and is forced to sell off some of the land it has acquired for future oil exploration. There is one plot in which Acme is interested. Until recently, it was expected that only three firms would bid for the plot, and Acme intended to bid $10 million. Now they have learned that seven more firms are bidding, bringing the total to ten. The question is, should Acme increase or decrease its bid? What advice do you give?
Did you advise bidding more or less? Most people's intuition in this problem is to bid more. After all, there are additional bidders, and if you don't bid more you won't get this land. However, there is another important consideration that is often ignored. Suppose that each participant in the auction is willing to bid just a little bit less than the amount he or she thinks the land is worth (leaving some room for profits). Of course, no one knows exactly how much oil is in the ground: some bidders will guess too high, others too low. Suppose, for the sake of argument, that the bidders have accurate estimates on average. Then, who will be the person who wins the auction? The winner will be the person who was the most optimistic about the amount of oil in the ground, and that person may well have bid more than the land is worth. This is the dreaded winner's curse. In an auction with many bidders, the winning bidder is often a loser. A key factor in avoiding the winner's curse is bidding more conservatively when there are more bidders. While this may seem counterintuitive, it is the rational thing to do.
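To see why more bidders call for a more conservative bid, consider a minimal simulation sketch (in Python; the true value, noise level, and degree of bid shading below are illustrative assumptions, not figures from the text). Each bidder receives an unbiased but noisy estimate of the tract's value and bids a fixed fraction of his or her own estimate, so the winning bid always comes from the most optimistic estimate.

import random

def simulate(n_bidders, true_value=10.0, noise_sd=3.0, shading=0.9, trials=10_000):
    # Returns the share of auctions in which the winner overpays,
    # and the winner's average profit, for a given number of bidders.
    overpaid, total_profit = 0, 0.0
    for _ in range(trials):
        # Each estimate is unbiased: the true value plus zero-mean noise.
        estimates = [random.gauss(true_value, noise_sd) for _ in range(n_bidders)]
        winning_bid = shading * max(estimates)   # the most optimistic bidder wins
        overpaid += winning_bid > true_value
        total_profit += true_value - winning_bid
    return overpaid / trials, total_profit / trials

for n in (3, 10):
    share, profit = simulate(n)
    print(f"{n:2d} bidders: winner overpays in {share:.0%} of auctions; "
          f"average winner's profit = {profit:+.2f} million")

With the same degree of bid shading, adding bidders makes the winning bid more likely to exceed the true value and drives the winner's expected profit down, which is why the rational response to more competition is to shade one's bid further.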
This book is about economic anomalies, of which the winner's curse is an example. An anomaly is a fact or observation that is inconsistent with the theory. Here, the theory of rational bidding advises bidding less when the number of bidders increases, yet most people would end up bidding more. Two ingredients are necessary to produce a convincing anomaly: a theory that makes crisp predictions and facts that contradict those predictions. In the case of economic anomalies, both ingredients can be difficult to obtain. While there is no shortage of economic theories, the theories are often extremely difficult to pin down. If we can't agree on what the theory predicts, then we can't agree on what constitutes an anomaly. In some cases, economists have in fact argued that some theories are simply not testable because they are true by definition. For example, the theory of utility maximization is said to be a tautology. If someone does something, no matter how odd it may seem, it must be utility maximizing, for otherwise the person wouldn't have done it. A theory is indeed not testable if no possible set of data could refute it. (In fact, it is not really a theory, more like a definition.) However, while many economists have taken comfort in the apparent irrefutability of their theories, others have been busy devising clever tests. And in economics the following natural law appears to hold: where there are tests there are anomalies.
What is economic theory? The same basic assumptions about behavior are used in all applications of economic analysis, be it the theory of the firm, financial markets, or consumer choice. The two key assumptions are rationality and self-interest. People are assumed to want to get as much for themselves as possible, and are assumed to be quite clever in figuring out how best to accomplish this aim. Indeed, an economist who spends a year finding a new solution to a nagging problem, such as the optimal way to search for a job when unemployed, is content to assume that the unemployed have already solved this problem and search accordingly. The assumption that everyone else can intuitively solve problems that an economist has to struggle to solve analytically reflects admirable modesty, but it does seem a bit puzzling. Surely another possibility is that people simply get it wrong. The possibility of cognitive error is of obvious importance in light of what Herbert Simon has called bounded rationality. Think of the human brain as a personal computer, with a very slow processor and a memory system that is both small and unreliable. I don't know about you, but the PC I carry between my ears has more disk failures than I care to think about.
What about the other tenet of economic theory, self-interest? Just how selfish are people? The trouble with the standard economic model is illustrated by the behavior exhibited by the drivers in Ithaca where I live. There is a creek that runs behind Cornell University. The two-way road that crosses this creek is served by a one-lane bridge. At busy times of the day, there can be several cars waiting to cross the bridge in either direction. What happens? Most of the time, four or five cars will cross the bridge in one direction, then the next car in line will stop and let a few cars go across the bridge in the other direction. This is a traffic plan that would work neither in New York City nor in an economic model. In New York City a bridge operating under these rules would, in effect, become one-way, the direction determined by the historical accident of the direction being traveled by the first car to arrive at the bridge!
In economic models, people are assumed to be more like New Yorkers than like Ithacans. Is this assumption valid? Fortunately, the cooperative behavior displayed by the Ithaca drivers is not unique. Most of us, even New Yorkers, also donate to charity, clean up camp grounds, and leave tips in restaurants, even those we never plan to visit again. Of course, many of us also cheat on our taxes (they will just waste the money anyway), overstate losses when making claims to insurance companies (well, just to recover the deductible), and improve the lie of our balls in golf (winter rules in August, if no one is looking). We are neither pure saints nor sinners, just human beings.
Unfortunately, there aren't many human beings populating the world of economic models. For example, the leading economic model of savings behavior, the life-cycle hypothesis, takes no account of the most important human factor entering savings decision making: self-control. In this model, if you receive a $1000 windfall you are expected to save almost all of it, since you wish to divide the consumption of the windfall evenly over all of your remaining years of life. Who needs windfalls if you have to spend them like that!
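For concreteness, here is a small sketch of the life-cycle arithmetic (in Python; the forty-year horizon and the interest rate are my own illustrative assumptions, not the book's). The model says to consume only the per-year share of the windfall each year and save the rest.

def first_year_consumption(windfall, years_remaining, interest_rate=0.0):
    # Life-cycle benchmark: consume only the annuity value of the windfall each year.
    if interest_rate == 0.0:
        return windfall / years_remaining
    return windfall * interest_rate / (1 - (1 + interest_rate) ** -years_remaining)

windfall, years = 1_000, 40              # illustrative numbers
spend = first_year_consumption(windfall, years)
print(f"Consume about ${spend:.0f} of the windfall this year; save the other ${windfall - spend:.0f}.")

With no interest, the prescription is to spend only $25 of the $1,000 this year, which is what makes the prediction feel so unhuman.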
We human beings do other things economists think are weird. Consider this one: You have won two tickets to the Super Bowl, conveniently (for this example) being played in the city where you live. Not only that, but your favorite team is playing. (If you are not a football fan, substitute something else that will get you appropriately excited.) A week before the game, someone approaches you and asks whether you would be willing to sell your tickets. What is the least you would be willing to accept for them? (Assume selling tickets is legal at any price.) Now, instead, suppose you do not have two tickets to the Super Bowl, but you have an opportunity to buy them. What is the most you would be willing to pay? For most people, these two answers differ by at least a factor of 2. A typical answer is to say that I wouldn't sell the tickets for less than $400 each, but I wouldn't pay more than $200. This behavior may seem reasonable to you, but according to economic theory your two answers should be almost identical, so the behavior must be considered an anomaly. This is not to say that there is anything wrong with the theory as a theory or model of rational choice. Rationality does imply the near equality of buying and selling prices. The problem is in using the same model to prescribe rational choice and to describe actual choices. If people are not always rational, then we may need two different models for these two distinct tasks.
Of course, I am hardly the first to criticize economics for making unrealistic assumptions about behavior. What is new here? To understand how the anomalies illustrated here present a new type of critique of economics, it is useful to review what the prior defenses of economic theory have been. The most prominent defense of the rational model was offered by Milton Friedman (1953). Friedman argued that even though people can't make the calculations inherent in the economic model, they act as if they could make the calculations. He uses the analogy of an expert billiards player who doesn't know either physics or geometry, but makes shots as if he could make use of this knowledge. Basically, Friedman's position is that it doesn't matter if the assumptions are wrong if the theory still makes good predictions. In light of this argument, this book stresses the actual predictions of the theory. I find that, assumptions aside, the theory is vulnerable just on the quality of the predictions.
A defense in the same spirit as Friedman's is to admit that of course people make mistakes, but the mistakes are not a problem in explaining aggregate behavior as long as they tend to cancel out. Unfortunately, this line of defense is also weak because many of the departures from rational choice that have been observed are systematic: the errors tend to be in the same direction. If most individuals tend to err in the same direction, then a theory which assumes that they are rational also makes mistakes in predicting their behavior. This point, stressed by my psychologist collaborators Daniel Kahneman and Amos Tversky, makes the new behavioral critique of economics more substantive.
Another line of defense is to say that neither irrationality nor altruism will matter in markets where people have strong incentives to choose optimally. This argument is taken to be particularly strong in financial markets where the costs of transactions are very small. In financial markets, if you are prepared to do something stupid repeatedly, there are many professionals happy to take your money. For this reason, financial markets are thought to be the most "efficient" of all markets. Because of this presumption that financial markets work best, I have given them special attention in this book. Perhaps surprisingly, financial markets turn out to be brimming with anomalies.
But why a whole book of anomalies? I think there are two reasons to bring these anomalies together. First, it is impossible to evaluate empirical facts in isolation. One anomaly is a mere curiosity, but 13 anomalies suggest a pattern. Thomas Kuhn, a philosopher of science, commented that "discovery commences with the awareness of anomaly, i.e., with the recognition that nature has somehow violated the paradigm-induced expectations that govern normal science." In this book I hope to accomplish that first step: awareness of anomaly. Perhaps at that point we can start to see the development of the new, improved version of economic theory. The new theory will retain the idea that individuals try to do the best they can, but these individuals will also have the human strengths of kindness and cooperation, together with the limited human abilities to store and process information.
A Monty Python sketch keeps coming back to you. The two characters are a banker (played by John Cleese) and a Mr. Ford (played by Terry Jones), who is collecting money for charity with a tin cup.
BANKER: How do you do. I'm a merchant banker.
FORD: How do you do Mr. . . .
BANKER: Er . . . I forgot my name for a moment but I am a merchant banker.
FORD: Oh. I wondered whether you'd like to contribute to the orphan's home. (He rattles the tin.)
BANKER: Well I don't want to show my hand too early, but actually here at Slater Nazi we are quite keen to get into orphans, you know, developing market and all that . . . what sort of sum did you have in mind?
FORD: Well . . . er . . . you're a rich man.
BANKER: Yes, I am. Yes, very, very, very, very, very, very, very, very, very, very, very rich.
FORD: So er, how about a pound?
BANKER: A pound. Yes, I see. Now this loan would be secured by the . . .
FORD: It's not a loan, sir.
BANKER: What?
FORD: It's not a loan.
BANKER: Ah.
FORD: You get one of these, sir. (He gives him a flag.)
BANKER: It's a bit small for a share certificate isn't it? Look, I think I'd better run this over to our legal department. If you could possibly pop back on Friday.
FORD: Well, do you have to do that, couldn't you just give me the pound?
BANKER: Yes, but you see I don't know what it is for.
FORD: It's for the orphans.
BANKER: Yes?
FORD: It's a gift.
BANKER: A what?
FORD: A gift?
BANKER: Oh a gift!
FORD: Yes.
BANKER: A tax dodge.
FORD: No, no, no, no.
BANKER: No? Well, I'm awfully sorry I don't understand. Can you explain exactly what you want?
FORD: Well, I want you to give me a pound, and then I go away and give it to the orphans.
BANKER: Yes?
FORD: Well, thatâs it.
BANKER: No, no, no, I don't follow this at all, I mean, I don't want to seem stupid but it looks to me as though I'm a pound down on the whole deal.
FORD: Well, yes you are.
BANKER: I am! Well, what is my incentive to give you the pound?
FORD: Well, the incentive is . . . to make the orphans happy.
BANKER: (genuinely puzzled) Happy? . . . Are you quite sure you've got this right?
FORD: Yes, lots of people give me money.
BANKER: What, just like that?
FORD: Yes.
BANKER: Must be sick. I don't suppose you could give me a list of their names and addresses could you?
FORD: No, I just go up to them in the street and ask.
BANKER: Good lord! That's the most exciting new idea I've heard in years! It's so simple it's brilliant! Well, if that idea of yours isn't worth a pound I'd like to know what is. (He takes the tin from Ford.)
FORD: Oh, thank you sir.
BANKER: The only trouble is, you gave me the idea before I'd given you the pound. And that's not good business.
FORD: It isnât?
BANKER: No, I'm afraid it isn't. So, um, off you go. (He pulls a lever opening a trap door under Ford's feet and Ford falls through with a yelp.) Nice to do business with you.
Much economic analysis (and virtually all game theory) starts with the assumption that people are both rational and selfish. An example is the analysis of the famous prisoner's dilemma (Rapoport and Chammah, 1965). A prisoner's dilemma game has the following structure. Two players must each select a strategy simultaneously and secretly. In the traditional story, the two players are prisoners who have jointly committed some crime and are being held separately. If each stays quiet (cooperates) they both are convicted of a minor charge and receive a one-year sentence. If just one confesses and agrees to testify against the other (defects), he goes free while the other receives a ten-year sentence. If both confess, they both receive a five-year sentence. The game is interesting because confessing is a dominant strategy: it pays to confess no matter what the other person does. If the other player stays quiet, confessing means going free rather than spending a year in jail. If, on the other hand, the other player also confesses, then confessing means a five-year sentence instead of ten. The assumptions of rationality and self-interest yield the prediction that people playing a game with this structure will defect. People are assumed to be clever enough to figure out that defection is the dominant strategy, and are assumed to care nothing for outcomes to other players; they will, moreover, have no qualms about their failure to do "the right thing."
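The dominance argument can be checked mechanically. Here is a minimal sketch in Python using the sentence lengths from the story (years in prison, so lower is better):

# Sentence (in years) as a function of (my move, the other player's move).
SENTENCE = {
    ("quiet",   "quiet"):   1,   # both stay quiet: minor charge
    ("quiet",   "confess"): 10,  # the other testifies against me
    ("confess", "quiet"):   0,   # I testify and go free
    ("confess", "confess"): 5,   # both confess
}

for other in ("quiet", "confess"):
    better = SENTENCE[("confess", other)] < SENTENCE[("quiet", other)]
    print(f"Other plays {other:7}: confess -> {SENTENCE[('confess', other)]} years, "
          f"stay quiet -> {SENTENCE[('quiet', other)]} years "
          f"({'confessing is better' if better else 'staying quiet is better'})")

Whatever the other prisoner does, confessing yields the shorter sentence, which is exactly why the theory predicts mutual defection even though both players would prefer the outcome in which both stay quiet.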
A similar analysis applies to what economists call public goods. A public good is one which has the following two properties: (1) once it is provided to one person, it is costless to provide to everyone else; (2) it is difficult to prevent someone who doesn't pay for the good from using it. The traditional example of a public good is national defense. Even if you pay no taxes, you are still protected by the Armed Forces. Another example is public radio and television. Even if you do not contribute, you can listen and watch. Again, economic theory predicts that when confronted with public goods, people will "free ride." That is, even if they enjoy listening to public radio, they will not make a contribution because there is no (selfish) reason to do so. (For a modern treatment of the theory of public goods, see Bergstrom, Blume, and Varian, 1986.)
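The free-riding prediction is easiest to see in the linear public goods game used in many laboratory experiments; the sketch below (in Python) uses illustrative parameters of my own choosing, not numbers from the text. Each of four players splits an endowment of 20 tokens between a private account and a group account, and every token placed in the group account returns 0.4 tokens to each member.

def payoff(my_contribution, others_contributions, endowment=20, mpcr=0.4):
    # Keep whatever I don't contribute, plus a per-capita return on the whole group account.
    group_total = my_contribution + sum(others_contributions)
    return (endowment - my_contribution) + mpcr * group_total

others = [10, 10, 10]                    # whatever the other three players happen to give
for c in (0, 10, 20):
    print(f"I contribute {c:2d} tokens -> my payoff = {payoff(c, others):.1f} tokens")

Because each token contributed returns only 0.4 tokens to the contributor, contributing nothing is the selfish best reply no matter what the others do; yet if all four players contribute everything, each earns 32 tokens instead of the 20 each gets under universal free riding. That tension between individual and group interest is what the experiments discussed in this chapter put to the test.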
The predictions derived from this assumption of rational selfishness are, however, violated in many familiar contexts. Public television in fact successfully raises enough money from viewers to continue to broadcast. The United Way and other charities receive contributions from many if not most citizens. Even when dining at a restaurant away from home in a place never likely to be visited again, most patrons tip the server. And people vote in presidential elections where the chance that a single vote will alter the outcome is vanishingly small. As summarized by Jack Hirshleifer (1985, p. 55), "the analytically uncomfortable (though humanly gratifying) fact remains: from the most primitive to the most advanced societies, a higher degree of cooperation takes place than can be explained as a merely pragmatic strategy for egoistic man." But why?
In this chapter and the next one, the evidence from laboratory experiments is examined to see what has been learned about when and why humans cooperate. This chapter considers the particularly important case of cooperation versus free riding in the context of public good provision.
...