CHAPTER 1
WHY DOES INTELLIGENCE FAIL, AND HOW CAN IT SUCCEED?
AMONG INTELLIGENCE PROFESSIONALS, the concept of intelligence failure is a sore subject. This is not surprising, because many people assume that when intelligence fails, it is because an intelligence officer or analyst has done a poor job. But for many in the intelligence business and in the academic field of intelligence studies, this is not necessarily the case: Intelligence can fail for many reasons, often despite the best work of intelligence professionals. Former US Marine Corps intelligence director Lieutenant General Paul Van Riper echoed the feelings of many in the intelligence community when he lamented after 9/11: "The Intelligence Community does a damn good job. It troubles me that people always speak in terms of operational successes and intelligence failures."1
But whether or not intelligence personnel or organizations are unfairly or too frequently blamed for mistakes, the subject of intelligence failure is widely studied and debated. In the words of one expert, "The study of intelligence failures is perhaps the most academically advanced field in the study of intelligence."2 Numerous studies have been produced examining various aspects of intelligence failure, such as the inability to provide sufficient warning of surprise attack. Much of this literature leads to the depressing conclusion that—as Richard Betts put it in a classic article—intelligence failures are inevitable.3
Intelligence failures can take many forms, but a common theme in major intelligence failures is that decision makers have been surprised. For politicians, senior military officers, and other leaders, surprise is usually a bad thing, and they often count on intelligence agencies to help them avoid it. The most significant surprises—the sorts of events that are sometimes called black swans—are known to military and national security analysts as strategic surprises. Scholars of strategic surprise have examined the failure of intelligence services to prevent or understand a wide variety of phenomena that pose a threat to national security, such as American intelligence's inability to foresee the fall of the shah of Iran or to understand the nature of Iraq's weapons of mass destruction programs before the United States-led invasion in 2003.
Given the great deal of attention paid to the topic of intelligence failure, it may seem surprising that there is little agreement in the intelligence literature on just what is meant by an "intelligence failure." Mark Lowenthal, a former senior CIA officer, puts the focus on intelligence agencies: "An intelligence failure is the inability of one or more parts of the intelligence process—collection, evaluation and analysis, production, dissemination—to produce timely, accurate intelligence on an issue or event of importance to national interests."4 Others argue that failures can be committed by policymakers and other senior officials, who either neglect or misuse the intelligence they are given. Abram N. Shulsky and Gary J. Schmitt focus on these officials who receive intelligence, writing: "An intelligence failure is essentially a misunderstanding of the situation that leads a government (or its military forces) to take actions that are inappropriate and counterproductive to its own interests."5 A better definition of intelligence failure combines these two concepts: failures can involve either a failure of the intelligence community to produce the intelligence needed by decision makers, or a failure on the part of decision makers to act on that intelligence appropriately.
This book focuses on what is by far the most widely studied type of intelligence failure: the failure to detect and prevent a surprise attack from a military, terrorist, or other enemy. But one of the central arguments of this book is that we spend too much time studying and worrying about intelligence failure, and we should instead be thinking about intelligence success. Before we can get there, however—before we can understand what makes intelligence succeed—we need to better understand why intelligence fails. This chapter reviews the conventional understanding of why intelligence fails, explains how this understanding falls short, and introduces my argument about intelligence and preventive action.
WHY DOES INTELLIGENCE FAIL?
As noted in this book's introduction, most scholars and practitioners who write about intelligence agree that failures usually happen because intelligence agencies and analysts fail to understand signals and warnings that were right in front of them all the time. They refer to this problem as an inability on the part of the intelligence authorities to "connect the dots" of existing information. They conclude that the problem is not in collecting the dots—gathering the intelligence in the first place. Instead, for psychological, organizational, or other reasons, intelligence officials—even when they are competent and trying hard—fail to understand the importance of the information (the "dots") they have.
Although this explanation of the problem may seem obvious, other explanations for failure are possible. For example, it could be that intelligence fails to warn of an attack or other disastrous event because there simply are not enough clues to go on—not enough dots to connect. Or it might not matter very much how much intelligence is available, if the responsible officials are incompetent. This latter explanation was a major conclusion of the Joint Congressional Committee that investigated the Pearl Harbor disaster. The committee had set out to answer the question: "Why, with some of the finest intelligence available in our history, with the almost certain knowledge that war was at hand, with plans that contemplated the precise type of attack that was executed by Japan on the morning of December 7—why was it possible for a Pearl Harbor to occur?"6 The committee answered its own question, finding that the disaster resulted from errors by the military commanders in Hawaii and from organizational deficiencies in the American military.7
When Roberta Wohlstetter published her study of Pearl Harbor in 1962, however, she made a different argument, one that has come to be accepted not only as the conventional wisdom about that disaster but also more generally as a broad theoretical explanation for intelligence failures and surprise attacks. She argued that the problem was not that the military commanders were incompetent, or that their intelligence staffs failed in their duties to collect intelligence about the threat from Japan. Instead, the problem lay in the analysis of the intelligence that was available.8 The signals that could have alerted the American forces to the danger of an attack on Hawaii were lost amid the far larger quantity of unrelated, contradictory, and confusing noise.
Wohlstetter's explanation for intelligence failure remains widely accepted today, and it can be seen in after-the-fact analyses of most failures and disasters, which find that such events could have been prevented if only officials had paid closer attention to, or been better able to process, the information and warnings that were available. This was the conclusion of the White House review after the Christmas Day 2009 attempt to blow up an airliner as it approached Detroit. More recently, after US Army major Nidal Hasan killed thirteen people at Fort Hood, Texas, senators Joseph Lieberman and Susan Collins argued that these deaths could have been prevented. The Department of Defense and the Federal Bureau of Investigation (FBI), they wrote, "collectively had sufficient information to have detected Hasan's radicalization to violent Islamist extremism but failed both to understand and to act on it."9 Even the turmoil and unrest that rocked much of the Middle East in early 2011, it has been claimed, could have been foreseen if only the warnings from some experts had been listened to.10
This is the conventional wisdom about what happens in cases of intelligence failure: It happens despite—and to some extent because of—the presence of abundant clues about the problems on the horizon, as dots are not connected and valuable signals become lost amid the sea of extraneous noise. This explains what happens. But to explain why intelligence officials and decision makers fail to understand the available intelligence, two primary schools of thought have developed: the traditional school and the reformist school.
The Traditional School
In her book on Pearl Harbor, Roberta Wohlstetter not only established the conventional wisdom about signals versus noise; she also laid the groundwork for what would become the majority view among scholars and practitioners about the causes of intelligence failures. One of the most striking aspects of this view—which I call the traditional school—is its pessimism. Wohlstetter's analysis of Pearl Harbor convinced her that the task of intelligence is intrinsically difficult, and as a result she believed that intelligence performance was not likely to get much better in the future. Writing at the beginning of the computer age, she argued that if anything, future developments in information processing would make surprise attacks even more likely: "In spite of the vast increase in expenditures for collecting and analyzing intelligence data and in spite of advances in the art of machine decoding and machine translation, the balance of advantage seems clearly to have shifted since Pearl Harbor in favor of a surprise attacker."11
This pessimistic view might sound unsurprising today, when major intelligence failures and surprises seem to arise nearly every year. But Wohlstetter's argument was a sharp corrective to what had until then been a widely held understanding about intelligence and the growing American intelligence system. This earlier view dates back to Sherman Kent, the Yale professor and long-serving senior CIA official who has been described as the dean of intelligence analysis. Kent saw intelligence as a form of scholarship that could be done well, if performed by the best minds applying rigorous social science methods.12 But Kent's optimistic view was countered by Wohlstetter's pessimistic analysis, which suggested that intelligence failure might in fact be unavoidable.
Wohlstetter's view became the dominant one among the relatively small community of scholars who studied intelligence matters during the Cold War. Richard Betts made the case for this traditional view in his much-cited 1978 article, in which he wrote that "intelligence failures are not only inevitable, they are natural."13 And because these failures are natural, traditionalists do not believe that intelligence officials should be held responsible for most failures. Betts wrote that there would always be some warning evident as tensions increased before a surprise attack; there are, he wrote in a comment frequently heard among traditional theorists of intelligence failure, no significant "bolts from the blue."14 But at the same time, these thinkers tend to argue that none of these warnings, even when considered after the fact, can be regarded as clear and definitive warnings of what was to come; thus it is not surprising that analysts would have missed what later appeared quite clear.
If anyone is responsible for intelligence failure, traditionalists believe, it is policymakers, who too often fail to take the advice given by intelligence professionals. Betts wrote that "the principal cause of surprise is not the failure of intelligence but the unwillingness of political leaders to believe intelligence or to react to it with sufficient dispatch."15 Michael Handel, another prominent student of intelligence failure, agreed with Betts that the most common culprit was the decision maker. Handel saw intelligence work as divided into the three levels of acquisition, analysis, and acceptance; and in this regard he observed that "historical experience confirms that intelligence failures were more often caused by a breakdown on the level of acceptance than on the acquisition or analysis levels."16
For Wohlstetter and other traditionalist scholars who studied the problem of surprise attack during the Cold War, the key problem for intelligence lay in a faulty analysis of the available information and not in the collection of that information in the first place. But why was intelligence analysis so often faulty? Although Wohlstetter did not offer any deeper answer to this question, later scholars in the traditional school found that problems of human psychology and cognition appeared to be at the root of the problem. Betts, for example, studied surprise attacks ranging from World War II through the Korean War to the 1973 Yom Kippur War, and he found that in most cases someone was ringing the alarm but it was not heard. The problem, he believed, was usually that there existed a conceptual consensus among decision makers that rejected the alarm, or else false alarms had dulled the impact of the alarm at the moment of crisis.17 Handel also felt the most common cause of intelligence failure was rooted in the psychological limitations of human nature: "Most intelligence failures occur because intelligence analysts and decisionmakers refuse to adapt their concepts to new information."18 Richards Heuer has offered what may be the most comprehensive statement of this approach in his Psychology of Intelligence Analysis, in which he argues that many intelligence failures are caused by mental mindsets and assumptions that are resistant to change, and by cognitive biases—that is, subconscious mental shortcuts and strategies that lead to faulty judgments.19
This emphasis on psychological and cognitive factors may help us understand why this school of thought tends to see intelligence failure as largely unavoidable. Just as human nature and patterns of cognition may be resistant to change, psychological limitations on intelligence may be resistant to improvement. Betts, for example, noted that "unlike organizational structure, … cognition cannot be altered by legislation."20 In 1964 Klaus Knorr made an argument that has since been echoed by a number of others: "It seems clear that the practical problem is...