I. PREVIOUS TREATMENTS OF PERCEPTION IN INTERNATIONAL RELATIONS
Although diplomatic historians have discussed misperception in their treatments of specific events, students of international relations have generally ignored this topic. However, two sets of scholars have applied content analysis to the documents that flowed within and between governments in the six weeks preceding World War I. But the data have been put into quantitative form in a way that does not produce accurate measures of perceptions and intentions and that makes it impossible to gather useful evidence on misperception.1
The second group of theorists who have explicitly dealt with general questions of misperception in international relations consists of those, like Charles Osgood, Amitai Etzioni, and, to a lesser extent, Kenneth Boulding and J. David Singer, who have analyzed the cold war in terms of a spiral of misperception.2 This approach grows partly out of the mathematical theories of L. F. Richardson3 and partly out of findings of social and cognitive psychology, many of which will be discussed in this research note.
These authors state their case in general, if not universal, terms, but do not provide many historical cases that are satisfactorily explained by their theories. Furthermore, they do not deal with any of the numerous instances that contradict their notion of the self-defeating aspects of the use of power. They ignore the fact that states are not individuals and that the findings of psychology can be applied to organizations only with great care. Most important, their theoretical analysis is for the most part of reduced value because it seems largely to be a product of their assumption that the Soviet Union is a basically status-quo power whose apparently aggressive behavior is a product of fear of the West. Yet they supply little or no evidence to support this view. Indeed, the explanation for the differences of opinion between the spiral theorists and the proponents of deterrence lies not in differing general views of international relations, differing values and morality,4 or differing methods of analysis,5 but in differing perceptions of Soviet intentions.
II. THEORIES—NECESSARY AND DANGEROUS
Despite the limitations of their approach, these writers have touched on a vital problem that has not been given systematic treatment by theorists of international relations. The evidence from both psychology and history overwhelmingly supports the view (which may be labeled Hypothesis 1) that decision-makers tend to fit incoming information into their existing theories and images. Indeed, their theories and images play a large part in determining what they notice. In other words, actors tend to perceive what they expect. Furthermore (Hypothesis 1a), a theory will have greater impact on an actor's interpretation of data (a) the greater the ambiguity of the data and (b) the higher the degree of confidence with which the actor holds the theory.6
For many purposes we can use the concept of differing levels of perceptual thresholds to deal with the fact that it takes more, and more unambiguous, information for an actor to recognize an unexpected phenomenon than an expected one. An experiment by Bruner and Postman determined "that the recognition threshold for … incongruous playing cards (those with suits and color reversed) is significantly higher than the threshold for normal cards."7 Not only are people able to identify normal (and therefore expected) cards more quickly and easily than incongruous (and therefore unexpected) ones, but also they may at first take incongruous cards for normal ones.
However, we should not assume, as the spiral theorists often do, that it is necessarily irrational for actors to adjust incoming information to fit more closely their existing beliefs and images. ("Irrational" here describes acting under pressures that the actor would not admit as legitimate if he were conscious of them.) Abelson and Rosenberg label as "psycho-logic" the pressure to create a "balanced" cognitive structure—i.e., one in which "all relations among 'good elements' [in one's attitude structure] are positive (or null), all relations among 'bad elements' are positive (or null), and all relations between good and bad elements are negative (or null)." They correctly show that the "reasoning [this involves] would mortify a logician."8 But those who have tried to apply this and similar cognitive theories to international relations have usually overlooked the fact that in many cases there are important logical links between the elements and the processes they describe which cannot be called "psycho-logic." (I am here using the term "logical" not in the narrow sense of drawing only those conclusions that follow necessarily from the premises, but rather in the sense of conforming to generally agreed-upon rules for the treating of evidence.) For example, Osgood claims that psycho-logic is displayed when the Soviets praise a man or a proposal and people in the West react by distrusting the object of this praise.9 But if a person believes that the Russians are aggressive, it is logical for him to be suspicious of their moves. When we say that a decision-maker "dislikes" another state, this usually means that he believes that that other state has policies conflicting with those of his nation. Reasoning and experience indicate to the decision-maker that the "disliked" state is apt to harm his state's interests.
Thus in these cases there is no need to invoke "psycho-logic," and it cannot be claimed that the cases demonstrate the substitution of "emotional consistency for rational consistency."10
The question of the relations among particular beliefs and cognitions can often be seen as part of the general topic of the relation of incoming bits of information to the receivers' already established images. The need to fit data into a wider framework of beliefs, even if doing so does not seem to do justice to individual facts, is not, or at least is not only, a psychological drive that decreases the accuracy of our perceptions of the world, but is "essential to the logic of inquiry."11 Facts can be interpreted, and indeed identified, only with the aid of hypotheses and theories. Pure empiricism is impossible, and it would be unwise to revise theories in the light of every bit of information that does not easily conform to them.12 No hypothesis can be expected to account for all the evidence, and if a prevailing view is supported by many theories and by a large pool of findings it should not be quickly altered. Too little rigidity can be as bad as too much.13
This is as true in the building of social and physical science as it is in policy-making.14 While it is terribly difficult to know when a finding throws serious doubt on accepted theories and should be followed up and when instead it was caused by experimental mistakes or minor errors in the theory, it is clear that scientists would make no progress if they followed Thomas Huxley's injunction to "sit down before fact as a mere child, be prepared to give up every preconceived notion, follow humbly wherever nature leads, or you will learn nothing."15
As Michael Polanyi explains, "It is true enough that the scientist must be prepared to submit at any moment to the adverse verdict of observational evidence. But not blindly…. There is always the possibility that, as in [the cases of the periodic system of elements and the quantum theory of light], a deviation may not affect the essential correctness of a proposition…. The process of explaining away deviations is in fact quite indispensable to the daily routine of research," even though this may lead to the missing of a great discovery.16 For example, in 1795, the astronomer Lalande did not follow up observations that contradicted the prevailing hypotheses and could have led him to discover the planet Neptune.17
Yet we should not be too quick to condemn such behavior. As Thomas Kuhn has noted, "There is no such thing as research without counter-instances."18 If a set of basic theories—what Kuhn calls a paradigm—has been able to account for a mass of data, it should not be lightly trifled with. As Kuhn puts it: "Lifelong resistance, particularly from those whose productive careers have committed them to an older tradition of normal science [i.e., science within the accepted paradigm], is not a violation of scientific standards but an index to the nature of scientific research itself. The source of resistance is the assurance that the older paradigm will ultimately solve all its problems, that nature can be shoved into the box the paradigm provides. Inevitably, at times of revolution, that assurance seems stubborn and pig-headed as indeed it sometimes becomes. But it is also something more. That same assurance is what makes normal science or puzzle-solving science possible."19
Thus it is important to see that the dilemma of how "open" to be to new information is one that inevitably plagues any attempt at understanding in any field. Instances in which evidence seems to be ignored or twisted to fit the existing theory can often be explained by this dilemma instead of by illogical or nonlogical psychological pressures toward consistency. This is especially true of decision-makers' attempts to estimate the intentions of other states, since they must constantly take account of the danger that the other state is trying to deceive them.
The theoretical framework discussed thus far, together with an examination of many cases, suggests Hypothesis 2: scholars and decision-makers are apt to err by being too wedded to the established view and too closed to new information, as opposed to being too willing to alter their theories.20 Another way of making this point is to argue that actors tend to establish their theories and expectations prematurely. In politics, of course, this is often necessary because of the need for action. But experimental evidence indicates that the same tendency also occurs on the unconscious level. Bruner and Postman found that "perhaps the greatest single barrier to the recognition of incongruous stimuli is the tendency for perceptual hypotheses to fixate after receiving a minimum of confirmation…. Once there had occurred in these cases a partial confirmation of the hypothesis … it seemed that nothing could change the subject's report."21
However, when we apply these and other findings to politics and discuss kinds of misperception, we should not quickly apply the label of cognitive distortion. We should proceed cautiously for two related reasons. The first is that the evidence available to decision-makers almost always permits several interpretations. It should be noted that there are cases of visual perception in which different stimuli can produce exactly the same pattern on an observer's retina. Thus, for an observer using one eye the same pattern would be produced by a sphere the size of a golf ball which was quite close to the observer, by a baseball-sized sphere that was further away, or by a basketball-sized sphere still further away. Without other clues, the observer cannot possibly determine which of these stimuli he is presented with, and we would not want to call his incorrect perceptions examples of distortion. Such cases, relatively rare in visual perception, are frequent in international relations. The evidence available to decision-makers is almost always very ambiguous since accurate clues to others' intentions are surrounded by noise22 and deception. In most cases, no matter how long, deeply, and "objectively" the evidence is analyzed, people can differ in their interpretations, and there are no general rules to indicate who is correct.
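The geometry behind the sphere example is simple similar-triangle reasoning: a sphere of radius r viewed from distance d subtends a visual angle of about 2·arctan(r/d), so any pair of stimuli sharing the ratio r/d projects an identical retinal image. The sketch below illustrates this with hypothetical radii and distances (the specific numbers are assumptions for illustration, not taken from the text):

```python
import math

def visual_angle(radius_m: float, distance_m: float) -> float:
    """Visual angle in radians subtended by a sphere of the given
    radius seen from the given distance (one-eyed observer)."""
    return 2 * math.atan(radius_m / distance_m)

# Illustrative radii (roughly golf ball, baseball, basketball), with each
# distance scaled so that the ratio r/d -- and hence the retinal image --
# is the same in all three cases.
golf = visual_angle(0.021, 0.5)
baseball = visual_angle(0.037, 0.5 * 0.037 / 0.021)
basketball = visual_angle(0.121, 0.5 * 0.121 / 0.021)

# All three angles coincide (up to floating-point rounding), so the
# retinal pattern alone cannot distinguish the three stimuli.
assert abs(golf - baseball) < 1e-9
assert abs(golf - basketball) < 1e-9
```

The point of the analogy survives the arithmetic: the ambiguity is in the stimulus itself, not in any failure of the observer's processing.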
The second reason to avoid the label of cognitive distortion is that the distinction between perception and judgment, obscure enough in individual psychology, is almost absent in the making of inferences in international politics. Decision-makers who reject information that contradicts their views—or who develop complex interpretations of it—often do so consciously and explicitly. Since the evidence available contains contradictory information, to make any inferences requires that much information be ignored or given interpretations that will seem tortuous to those who hold a different position.
Indeed, if we consider only the evidence available to a decision-maker at the time of decision, the view later proved incorrect may be supported by as much...