Intelligence and Surprise Attack
eBook - ePub

Intelligence and Surprise Attack

Failure and Success from Pearl Harbor to 9/11 and Beyond

Erik J. Dahl

About This Book

How can the United States avoid a future surprise attack on the scale of 9/11 or Pearl Harbor, in an era when such devastating attacks can come not only from nation states, but also from terrorist groups or cyber enemies?

Intelligence and Surprise Attack examines why surprise attacks often succeed even though, in most cases, warnings had been available beforehand. Erik J. Dahl challenges the conventional wisdom about intelligence failure, which holds that attacks succeed because important warnings get lost amid noise or because intelligence officials lack the imagination and collaboration to "connect the dots" of available information. Comparing cases of intelligence failure with intelligence success, Dahl finds that the key to success is not more imagination or better analysis, but better acquisition of precise, tactical-level intelligence combined with the presence of decision makers who are willing to listen to and act on the warnings they receive from their intelligence staff.

The book offers a new understanding of classic cases of conventional and terrorist attacks such as Pearl Harbor, the Battle of Midway, and the bombings of US embassies in Kenya and Tanzania. The book also presents a comprehensive analysis of the intelligence picture before the 9/11 attacks, making use of new information available since the publication of the 9/11 Commission Report and challenging some of that report's findings.




AMONG INTELLIGENCE PROFESSIONALS, the concept of intelligence failure is a sore subject. This is not surprising, because many people assume that when intelligence fails, it is because an intelligence officer or analyst has done a poor job. But for many in the intelligence business and in the academic field of intelligence studies, this is not necessarily the case: Intelligence can fail for many reasons, often despite the best work of intelligence professionals. Former US Marine Corps intelligence director Lieutenant General Paul Van Riper echoed the feelings of many in the intelligence community when he lamented after 9/11: “The Intelligence Community does a damn good job. It troubles me that people always speak in terms of operational successes and intelligence failures.”1
But whether or not intelligence personnel or organizations are unfairly or too frequently blamed for mistakes, the subject of intelligence failure is widely studied and debated. In the words of one expert, “The study of intelligence failures is perhaps the most academically advanced field in the study of intelligence.”2 Numerous studies have been produced examining various aspects of intelligence failure, such as the inability to provide sufficient warning of surprise attack. Much of this literature leads to the depressing conclusion that—as Richard Betts put it in a classic article—intelligence failures are inevitable.3
Intelligence failures can take many forms, but a common theme in major intelligence failures is that decision makers have been surprised. For politicians, senior military officers, and other leaders, surprise is usually a bad thing, and they often count on intelligence agencies to help them avoid it. The most significant surprises—the sorts of events that are sometimes called black swans—are known to military and national security analysts as strategic surprises. Scholars of strategic surprise have examined the failure of intelligence services to prevent or understand a wide variety of phenomena that pose a threat to national security, such as American intelligence's inability to foresee the fall of the shah of Iran or to understand the nature of Iraq's weapons of mass destruction programs before the US-led invasion in 2003.
Given the great deal of attention paid to the topic of intelligence failure, it may seem surprising that there is little agreement in the intelligence literature on just what is meant by an “intelligence failure.” Mark Lowenthal, a former senior CIA officer, puts the focus on intelligence agencies: “An intelligence failure is the inability of one or more parts of the intelligence process—collection, evaluation and analysis, production, dissemination—to produce timely, accurate intelligence on an issue or event of importance to national interests.”4 Others argue that failures can be committed by policymakers and other senior officials, who either neglect or misuse the intelligence they are given. Abram N. Shulsky and Gary J. Schmitt focus on these officials who receive intelligence, writing: “An intelligence failure is essentially a misunderstanding of the situation that leads a government (or its military forces) to take actions that are inappropriate and counterproductive to its own interests.”5 A better definition of intelligence failure combines these two concepts; failures can involve either a failure of the intelligence community to produce the intelligence needed by decision makers, or a failure on the part of decision makers to act on that intelligence appropriately.
This book focuses on what is by far the most widely studied type of intelligence failure: the failure to detect and prevent a surprise attack from a military, terrorist, or other enemy. But one of the central arguments of this book is that we spend too much time studying and worrying about intelligence failure, and we should instead be thinking about intelligence success. Before we can get there, however—before we can understand what makes intelligence succeed—we need to better understand why intelligence fails. This chapter reviews the conventional understanding of why intelligence fails, explains how this understanding falls short, and introduces my argument about intelligence and preventive action.


As noted in this book’s introduction, most scholars and practitioners who write about intelligence agree that failures usually happen because intelligence agencies and analysts fail to understand signals and warnings that were right in front of them all the time. They refer to this problem as an inability on the part of the intelligence authorities to “connect the dots” of existing information. They conclude that the problem is not in collecting the dots—gathering the intelligence in the first place. Instead, for psychological, organizational, or other reasons, intelligence officials—even when they are competent and trying hard—fail to understand the importance of the information (the “dots”) they have.
Although this explanation of the problem may seem obvious, other explanations for failure are possible. For example, it could be that intelligence fails to warn of an attack or other disastrous event because there simply are not enough clues to go on—not enough dots to connect. Or it might not matter very much how much intelligence is available, if the responsible officials are incompetent. This latter explanation was a major conclusion of the Joint Congressional Committee that investigated the Pearl Harbor disaster. The committee had set out to answer the question: “Why, with some of the finest intelligence available in our history, with the almost certain knowledge that war was at hand, with plans that contemplated the precise type of attack that was executed by Japan on the morning of December 7—why was it possible for a Pearl Harbor to occur?”6 The committee answered its own question, finding that the disaster resulted from errors by the military commanders in Hawaii and from organizational deficiencies in the American military.7
When Roberta Wohlstetter published her study of Pearl Harbor in 1962, however, she made a different argument, one that has come to be accepted not only as the conventional wisdom about that disaster but also more generally as a broad theoretical explanation for intelligence failures and surprise attacks. She argued that the problem was not that the military commanders were incompetent, or that their intelligence staffs failed in their duties to collect intelligence about the threat from Japan. Instead, the problem lay in the analysis of the intelligence that was available.8 The signals that could have alerted the American forces to the danger of an attack on Hawaii were lost amid the far larger quantity of unrelated, contradictory, and confusing noise.
Wohlstetter’s explanation for intelligence failure remains widely accepted today, and it can be seen in after-the-fact analyses of most failures and disasters, which find that such events could have been prevented if only we had paid better attention to, or had been able to better process, the volume of information and warnings that were available. This was the conclusion of the White House review after the Christmas Day 2009 attempt to blow up an airliner as it approached Detroit. More recently, after US Army major Nidal Hasan killed thirteen people at Fort Hood, Texas, Senators Joseph Lieberman and Susan Collins argued that these deaths could have been prevented. The Department of Defense and the Federal Bureau of Investigation (FBI), they wrote, “collectively had sufficient information to have detected Hasan’s radicalization to violent Islamist extremism but failed both to understand and to act on it.”9 Even the turmoil and unrest that rocked much of the Middle East in early 2011, it has been claimed, could have been foreseen if only the warnings from some experts had been listened to.10
This is the conventional wisdom about what happens in cases of intelligence failure: It happens despite—and to some extent because of—the presence of abundant clues about the problems on the horizon, as dots are not connected and valuable signals become lost amid the sea of extraneous noise. This explains what happens. But to explain why intelligence officials and decision makers fail to understand the available intelligence, two primary schools of thought have developed: the traditional school and the reformist school.

The Traditional School

In her book on Pearl Harbor, Roberta Wohlstetter not only established the conventional wisdom about signals versus noise; she also laid the groundwork for what would become the majority view among scholars and practitioners about the causes of intelligence failures. One of the most striking aspects of this view—which I call the traditional school—is its pessimism. Wohlstetter’s analysis of Pearl Harbor convinced her that the task of intelligence is intrinsically difficult, and as a result she believed that intelligence performance was not likely to get much better in the future. Writing at the beginning of the computer age, she argued that if anything, future developments in information processing would make surprise attacks even more likely: “In spite of the vast increase in expenditures for collecting and analyzing intelligence data and in spite of advances in the art of machine decoding and machine translation, the balance of advantage seems clearly to have shifted since Pearl Harbor in favor of a surprise attacker.”11
This pessimistic view might sound unsurprising today, when major intelligence failures and surprises seem to arise nearly every year. But Wohlstetter’s argument was a sharp corrective to what had until then been a widely held understanding about intelligence and the growing American intelligence system. This earlier view dates back to Sherman Kent, the Yale professor and long-serving senior CIA official who has been described as the dean of intelligence analysis. Kent saw intelligence as a form of scholarship that could be done well, if performed by the best minds applying rigorous social science methods.12 But Kent’s optimistic view was countered by Wohlstetter’s pessimistic analysis, which suggested that intelligence failure might in fact be unavoidable.
Wohlstetter’s view became the dominant one among the relatively small community of scholars who studied intelligence matters during the Cold War. Richard Betts made the case for this traditional view in his much-cited 1978 article, in which he wrote that “intelligence failures are not only inevitable, they are natural.”13 And because these failures are natural, traditionalists do not believe that intelligence officials should be held responsible for most failures. Betts wrote that there would always be some warning evident as tensions increase before a surprise attack; there are, he wrote in a comment frequently heard among traditional theorists of intelligence failure, no significant “bolts from the blue.”14 But at the same time, these thinkers tend to argue that none of these warnings, even when considered after the fact, can be considered clear and definitive warnings of what was to come; thus it is not surprising that analysts would have missed what later appeared quite clear.
If anyone is responsible for intelligence failure, traditionalists believe, it is policymakers, who too often fail to take the advice given by intelligence professionals. Betts wrote that “the principal cause of surprise is not the failure of intelligence but the unwillingness of political leaders to believe intelligence or to react to it with sufficient dispatch.”15 Michael Handel, another prominent student of intelligence failure, agreed with Betts that the most common culprit was the decision maker. Handel saw intelligence work as divided into the three levels of acquisition, analysis, and acceptance; and in this regard he observed that “historical experience confirms that intelligence failures were more often caused by a breakdown on the level of acceptance than on the acquisition or analysis levels.”16
For Wohlstetter and other traditionalist scholars who studied the problem of surprise attack during the Cold War, the key problem for intelligence lay in a faulty analysis of the available information and not in the collection of that information in the first place. But why was intelligence analysis so often faulty? Although Wohlstetter did not offer any deeper answer to this question, later scholars in the traditional school found that problems of human psychology and cognition appeared to be at the root of the problem. Betts, for example, studied surprise attacks ranging from World War II through the Korean War to the 1973 Yom Kippur War, and he found that in most cases someone was ringing the alarm but it was not heard. The problem, he believed, was usually that there existed a conceptual consensus among decision makers that rejected the alarm, or else false alarms had dulled the impact of the alarm at the moment of crisis.17 Handel also felt the most common cause of intelligence failure was based in the psychological limitations of human nature: “Most intelligence failures occur because intelligence analysts and decisionmakers refuse to adapt their concepts to new information.”18 Richards Heuer has offered what may be the most comprehensive statement of this approach in his Psychology of Intelligence Analysis, in which he argues that many intelligence failures are caused by mental mindsets and assumptions that are resistant to change, and by cognitive biases—that is, subconscious mental shortcuts and strategies that lead to faulty judgments.19
This emphasis on psychological and cognitive factors may help us understand why this school of thought tends to see intelligence failure as largely unavoidable. Just as human nature and patterns of cognition may be resistant to change, psychological limitations on intelligence may be resistant to improvement. Betts, for example, noted that “unlike organizational structure, cognition cannot be altered by legislation.”20 In 1964 Klaus Knorr made an argument that has since been echoed by a number of others: “It seems clear that the practical problem is...
