Three Tweets to Midnight

Effects of the Global Information Ecosystem on the Risk of Nuclear Conflict

Edited by Herbert S. Lin, Benjamin Loehrke, and Harold A. Trinkunas

248 pages · English · ePub

About This Book

Disinformation and misinformation have always been part of conflict. But as the essays in this volume outline, the rise of social media and the new global information ecosystem have created conditions for the spread of propaganda like never before—with potentially disastrous results.

In our "post-truth" era of bots, trolls, and intemperate presidential tweets, popular social platforms like Twitter and Facebook provide a growing medium for manipulation of information directed to individuals, institutions, and global leaders. A new type of warfare is being fought online each day, often in 280 characters or fewer. Targeted influence campaigns have been waged in at least forty-eight countries so far. We've entered an age where stability during an international crisis can be deliberately manipulated at greater speed, on a larger scale, and at a lower cost than at any previous time in history.

This volume examines the current reality from a variety of angles, considering how digital misinformation might affect the likelihood of international conflict and how it might influence the perceptions and actions of leaders and their publics before and during a crisis. It sounds the alarm about how social media increases information overload and promotes "fast thinking," with potentially catastrophic results for nuclear powers.

Chapter 1
Retweets to Midnight:
Assessing the Effects of the Information Ecosystem on Crisis Decision Making between Nuclear Weapons States
Danielle Jablanski, Herbert S. Lin, and Harold A. Trinkunas
What if the Cuban Missile Crisis had taken place in today’s global information environment, characterized by the emergence of social media as a major force amplifying the effects of information on both leaders and citizens? President Kennedy might not have had days to deliberate with the Executive Committee of the National Security Council before delivering a measured speech announcing to the world the discovery of Soviet medium- and intermediate-range nuclear-armed missiles in Cuba.1
Nongovernmental open-source intelligence organizations like Bellingcat could have used commercially available satellite imagery to detect the presence of these missiles and publicize them to the world on October 12, 1962—four days before the president himself was briefed on October 16, and ten days before his televised address. Imagine pictures of the missile sites going viral on social media, alarming millions around the world. Imagine that these real-time images were accompanied by deliberate information operations from adversaries seeking to cast doubt on the facts to sow confusion and cause paralysis among domestic populations and between NATO leaders, as well as by internet trolls promoting misinformation and reposting and propagating tailored information leaks.
The shooting down of a U-2 spy plane over Cuba might have been news within the hour, becoming the subject of numerous tweets and relentless commentary on Facebook and other platforms. When the Joint Chiefs of Staff’s recommendation to invade Cuba was overruled by President Kennedy, “alt” social media accounts that served as fronts for disgruntled Pentagon officials might have leaked the proposed invasion plan to induce the administration to reverse course on the chosen alternative—a blockade. Pressured by public opinion and the Joint Chiefs of Staff, President Kennedy might not have had the luxury of picking which of Premier Khrushchev’s letters to respond to, which the historical record shows helped to de-escalate the crisis. In this situation, which former secretary of defense William J. Perry has characterized as the closest the world has come to nuclear catastrophe, the current global information ecosystem could have magnified the risk of the conflict’s escalating into all-out nuclear war.2
Figure 1.1. Hypothetical tweet by President John F. Kennedy during Cuban Missile Crisis.
Source: Scott Sagan, “The Cuban Missile Crisis in the Age of Twitter,” lecture at Stanford’s Center for International Security and Cooperation, April 3, 2018.
What’s New? Characteristics of the Modern Information Ecosystem
Social media and the resulting dynamics for interpersonal interconnectivity have increased the volume and velocity of communication by orders of magnitude in the past decade. More information reaches more people in more places than ever before. Algorithms and business models based on advertising principles utilize troves of user data to draw aggregate inferences, which allow for microsegmentation of audiences and direct targeting of disinformation or misinformation. Mainstream-media outlets no longer serve their traditional role as gatekeepers with near-universal credibility.3 In this ecosystem, propaganda can rapidly spread far and wide, while efforts to correct false information are costly, slow, and frequently fail altogether.
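To make the microsegmentation mechanism concrete, here is a minimal sketch in Python of how behavioral data can be clustered into narrow audience segments and one segment singled out for tailored messaging. The feature names, data, and cluster count are invented for illustration; real advertising systems are proprietary, learned, and far more elaborate.

```python
# Illustrative sketch of audience "microsegmentation": cluster users by
# behavioral features, then single out the segment most receptive to a
# tailored message. All features, data, and thresholds are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical per-user features: [political_clicks, shares_per_day,
# outrage_reaction_rate] -- stand-ins for the signals platforms actually log.
users = rng.random((10_000, 3))

model = KMeans(n_clusters=50, n_init=10, random_state=0).fit(users)

# Target the segment with the highest average "outrage reaction rate":
# the audience most likely to respond reflexively to divisive content.
target = int(np.argmax(model.cluster_centers_[:, 2]))
audience = np.flatnonzero(model.labels_ == target)
print(f"segment {target}: {audience.size} users selected for targeted content")
```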
Nor are all of the voices on social media authentic. Some inauthentic voices are those of paid human trolls, for example from the Internet Research Agency, revealed to have created and spread false information on behalf of the Russian government prior to the 2016 US presidential election.4 Others are Macedonian entrepreneurs who at one point discovered ways to monetize an affinity among some voters for fake news critical of Hillary Clinton.5 Some voices are not even human, as demonstrated by the introduction of “bots”—automated social media accounts designed to mimic human behavior online that further complicate our ability to discern fact from fiction within the ecosystem.
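Distinguishing bots from people is itself a technical problem. Below is a toy heuristic scorer under invented features and thresholds; production detectors (trained classifiers in the style of Botometer) weigh hundreds of signals, so this is only a sketch of the idea that sustained volume, account youth, and repetition look automated.

```python
# A toy heuristic for flagging possible bot accounts. The thresholds and
# weights are invented purely to illustrate the concept.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float         # sustained posting rate
    account_age_days: int        # very new accounts carry less reputation
    duplicate_post_ratio: float  # share of posts that are verbatim repeats

def bot_score(a: Account) -> float:
    """Crude 0-1 score: high-volume, young, repetitive accounts look automated."""
    score = 0.0
    if a.posts_per_day > 72:     # roughly one post every 20 minutes, nonstop
        score += 0.4
    if a.account_age_days < 30:
        score += 0.3
    if a.duplicate_post_ratio > 0.5:
        score += 0.3
    return score

suspect = Account(posts_per_day=200, account_age_days=12, duplicate_post_ratio=0.8)
print(bot_score(suspect))  # 1.0 -> worth human review, not proof of automation
```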
Rapid transmission of content and curated affinity networks polarize citizens around divisive issues and create waves of public opinion that can pressure leaders.6 So many different narratives emerge around complex events that polities splinter into their disparate informational universes, unable to agree on an underlying reality. Does this unprecedented availability of information and connectivity amplify the ability of actors to sow discord in the minds of the domestic publics and even the leadership of adversaries? Could these dynamics affect leaders and citizens to the degree that miscalculation or misperception can produce crisis instability ultimately leading to a nuclear exchange? Can governance mechanisms be designed and implemented that are capable of countering and combating the manipulation of information in this ecosystem?
This volume argues that the present information ecosystem increasingly poses risks for crisis stability. Manipulated information, either artificially constructed or adopted by a strong grassroots base, can be used by interested actors to generate pressure from various constituencies on leaders to act. At the same time, these leaders themselves face information overload and their ability to distinguish between true and false information may be impaired, especially if they are receiving information simultaneously from their own sources and other sources from within their constituencies. Such confusion can ultimately lead to inaction or bad decisions. Or, this environment might produce an accelerated reaction based on slanted or unanalyzed information. Most worrisome is the possibility that the rapid spread of disinformation or misinformation via social media may in the end distort the decision-making calculus of leaders during a crisis and thereby contribute to crisis instability in future conflicts, the effects of which could be most severe for nuclear weapons states.
The Psychology of Complex Decision Making and Nuclear Crisis
Many theories of deterrence rely on the rationality assumption, namely that a rational actor can be convinced that the cost-benefit ratio associated with initiating an attack is unfavorable due to a credible threat of retaliation by the adversary. The risk of a nuclear exchange during the Cold War led theorists to focus on how leaders might approach crises and what could be done to avert deterrence failure. This prompted debates about a range of putatively rational actions that nuclear states might engage in to build a reliable framework for deterrence: reassurances to allies by extending the nuclear umbrella, force postures designed to ensure a survivable retaliatory capability, credible signaling to convince adversaries that any attack would meet with massive retaliation, etc.7
But human decision makers are just that—human—and a great deal of psychological research in the past few decades has demonstrated the limits of rational thinking and decision making. Paul Slovic has written extensively about the human brain, decision making, and the limits of our ability to comprehend the weight of decisions that could imperil large numbers of human lives. Various psychological processes come into play when people try to weigh the value of lives lost in large numbers, including psychic numbing, tribalism, the prominence effect, imperative thinking, and victim blaming. As Slovic and Herbert Lin argue in chapter 3, this implies that leaders facing a decision on whether to order the use of nuclear weapons will find it difficult to operate "rationally."
Psychology also tells us that—more often than not—fast, intuitive judgments take precedence over slower, more analytical thinking. Fast thinking (identified as System 1 thinking by the originator of the concept, Daniel Kahneman) is intuitive and heuristic, generating rapid, reflexive responses to various situations that are—more often than not—useful in daily life. Slow thinking (known to cognitive psychologists as System 2 thinking) is more conceptual and deliberative.8 Although both are useful in their appropriate roles, their operation in today's information ecosystem can be problematic. "Fast thinking is problematic when we are trying to understand how to respond to large-scale human crises, with catastrophic consequences," Slovic and Lin write. "Slow thinking, too, can be incoherent in the sense that subtle influences—such as unstated, unconscious, or implicitly held attitudes—can lead to considered decisions that violate one's strongly held values." The prevalence of heuristic and "imperative" thinking among humans suggests that an overarching goal, such as national defense in the face of a nuclear crisis, would likely eclipse consideration of second-order effects and consequences—the likelihood of massive loss of life on all sides, or catastrophic effects on the global environment—to the extent that such discussion is actively, if not subconsciously, avoided.9
Observers have long anticipated that leaders would be under severe time pressure when deciding whether or not to use nuclear weapons. The most acute of these pressures is "launch on warning": the pressure to launch fixed land-based ICBMs before they can be destroyed on the ground by incoming enemy warheads. Fast, reflexive thinking (i.e., System 1 thinking) is more likely to be used under the kind of pressure this scenario highlights. Against a ticking clock, and given the difficulty of comprehending the consequences of nuclear conflict, the argument that rational, deliberate decision making and deterrence will prevail is a highly debatable proposition—particularly under the added weight of the misinformation and disinformation that might propagate through the global information ecosystem during a crisis.
The possibility that decision makers may rely on incorrect perceptions of potential adversaries has long been an important critique of rational deterrence theory. International relations theorists such as Robert Jervis have argued that the failure of deterrence can frequently be attributed to misperception among leaders: of intentions, of capabilities, of the consequences of conflict, etc. This misperception can have its roots in leaders’ psychology, in lack of information, and in leaders’ assumptions about what information the other side has or how they in turn perceive the situation.10
In the 1980s, Jervis had already argued that misperception was a common cause of deterrence failure. In today's global information ecosystem, there are more data available than ever before. But rather than reducing the likelihood of misperception through the greater availability of information about potential adversaries, the present information environment provides unprecedented opportunities to manipulate leaders' and publics' perceptions of intentions, capabilities, and the consequences of conflict—cheaply, rapidly, and at scale.
Tools and Tactics in the Modern Information Ecosystem
Social media have emerged as a modern vehicle for changing narratives. Social media platforms are arguably optimized to keep users in a "fast" pattern of thinking, promoting impulsive and intuitive responses in order to engage users emotionally and maximize both advertising revenue and user experience.11 This characteristic of social media platforms may also provide avenues by which these same users can be manipulated more effectively for political aims. Although insidious, anonymous propaganda is not a new phenomenon, automation, algorithms, and big data now allow various actors to selectively amplify or suppress information viewed by hundreds of millions of people via social media and online networks.12 There is evidence of targeted influence campaigns in at least forty-eight countries to date.13 Facebook, YouTube, WhatsApp, Instagram, and Reddit have also been platforms for a variety of divisive information operations.
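A hypothetical worked example of how engagement-optimized ranking selectively amplifies "fast thinking" content: the scoring weights below are invented, but they capture the logic by which shares and strong emotional reactions, which predict further engagement, push inflammatory material above measured analysis.

```python
# Sketch of why engagement-optimized ranking favors reflexive content.
# The formula is invented for illustration; production feed rankers are
# learned models, not hand-written rules.
posts = [
    {"id": "measured-analysis", "clicks": 120, "shares": 10, "outrage_reactions": 5},
    {"id": "inflammatory-rumor", "clicks": 300, "shares": 450, "outrage_reactions": 600},
]

def engagement_score(p: dict) -> float:
    # Shares and strong emotional reactions predict further engagement,
    # so a revenue-maximizing ranker weights them heavily.
    return p["clicks"] + 3 * p["shares"] + 5 * p["outrage_reactions"]

for p in sorted(posts, key=engagement_score, reverse=True):
    print(p["id"], engagement_score(p))
# The rumor outranks the analysis by an order of magnitude: the optimization
# target rewards content that triggers rapid, reflexive responses.
```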
As Mark Kumleben and Samuel C. Woolley note in chapter 4, campaigns have often made use of networks of "bots"—partially or wholly automated accounts—to introduce and circulate false or malign information, craft and control the narrative at the outset of a real event, and project a manufactured consensus around an issue. For example, an estimated 15 percent of Twitter's approximately 335 million users (as of 2018)—roughly fifty million accounts—are bots. Bots are employed to promote a mix of authentic and inauthentic content, automate the amplification of specific sources, disrupt and overwhelm channels and conversations with irrelevant noise, and harass individuals or groups online in order to silence or intimidate them.
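A toy simulation, under invented parameters, of the manufactured-consensus effect described above: an early burst of bot amplification inflates a post's apparent popularity, which in turn raises the probability that real users share it, compounding the head start.

```python
# Toy cascade model: bots retweet a seed post immediately, and each round
# every exposed user shares with a probability that grows with apparent
# popularity. All parameters are invented for illustration.
import random

random.seed(1)

def simulated_reach(bot_boost: int, rounds: int = 10) -> int:
    reach = 1 + bot_boost  # bots amplify the seed post at the outset
    for _ in range(rounds):
        p_share = min(0.05 + reach / 100_000, 0.3)
        reach += sum(random.random() < p_share for _ in range(reach))
    return reach

print("organic:      ", simulated_reach(bot_boost=0))
print("bot-amplified:", simulated_reach(bot_boost=500))
```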
Information operations have more than commercial or political/electoral implications. We are also witnessing an increase in states using such strategies to shape potential battlefields. Using the example of information operations against NATO, Kate Starbird employs a mixed-method analysis in chapter 5 of this volume...
