Three Tweets to Midnight
eBook - ePub

Three Tweets to Midnight

Effects of the Global Information Ecosystem on the Risk of Nuclear Conflict

Herbert S. Lin, Benjamin Loehrke, Harold A. Trinkunas

  1. 248 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS and Android
Book information

Disinformation and misinformation have always been part of conflict. But as the essays in this volume outline, the rise of social media and the new global information ecosystem have created conditions for the spread of propaganda like never before—with potentially disastrous results.

In our "post-truth" era of bots, trolls, and intemperate presidential tweets, popular social platforms like Twitter and Facebook provide a growing medium for manipulation of information directed to individuals, institutions, and global leaders. A new type of warfare is being fought online each day, often in 280 characters or fewer. Targeted influence campaigns have been waged in at least forty-eight countries so far. We've entered an age where stability during an international crisis can be deliberately manipulated at greater speed, on a larger scale, and at a lower cost than at any previous time in history.

This volume examines the current reality from a variety of angles, considering how digital misinformation might affect the likelihood of international conflict and how it might influence the perceptions and actions of leaders and their publics before and during a crisis. It sounds the alarm about how social media increases information overload and promotes "fast thinking," with potentially catastrophic results for nuclear powers.



Chapter 1
Retweets to Midnight:
Assessing the Effects of the Information Ecosystem on Crisis Decision Making between Nuclear Weapons States
Danielle Jablanski, Herbert S. Lin, and Harold A. Trinkunas
What if the Cuban Missile Crisis had taken place in today’s global information environment, characterized by the emergence of social media as a major force amplifying the effects of information on both leaders and citizens? President Kennedy might not have had days to deliberate with the Executive Committee of the National Security Council before delivering a measured speech announcing to the world the discovery of Soviet medium- and intermediate-range nuclear-armed missiles in Cuba.1
Nongovernmental open source intelligence organizations like Bellingcat could have used commercially available satellite imagery to detect the presence of these missiles and publicize them to the world on October 12, 1962, four days earlier than the president did. Imagine pictures of the missile sites going viral on social media, alarming millions around the world. Imagine that these real-time images were accompanied by deliberate information operations from adversaries seeking to cast doubt on the facts to sow confusion and cause paralysis among domestic populations and between NATO leaders, as well as by internet trolls promoting misinformation and reposting and propagating tailored information leaks.
The shooting down of a U-2 spy plane over Cuba might have been news within the hour, becoming the subject of numerous tweets and relentless commentary on Facebook and other platforms. When the Joint Chiefs of Staff’s recommendation to invade Cuba was overruled by President Kennedy, “alt” social media accounts that served as fronts for disgruntled Pentagon officials might have leaked the proposed invasion plan to induce the administration to reverse course on the chosen alternative—a blockade. Pressured by public opinion and the Joint Chiefs of Staff, President Kennedy might not have had the luxury of picking which of Premier Khrushchev’s letters to respond to, which the historical record shows helped to de-escalate the crisis. In this situation, which former secretary of defense William J. Perry has characterized as the closest the world has come to nuclear catastrophe, the current global information ecosystem could have magnified the risk of the conflict’s escalating into all-out nuclear war.2
Figure 1.1. Hypothetical tweet by President John F. Kennedy during Cuban Missile Crisis.
Source: Scott Sagan, “The Cuban Missile Crisis in the Age of Twitter,” lecture at Stanford’s Center for International Security and Cooperation, April 3, 2018.
What’s New? Characteristics of the Modern Information Ecosystem
Social media and the resulting dynamics for interpersonal interconnectivity have increased the volume and velocity of communication by orders of magnitude in the past decade. More information reaches more people in more places than ever before. Algorithms and business models based on advertising principles utilize troves of user data to draw aggregate inferences, which allow for microsegmentation of audiences and direct targeting of disinformation or misinformation. Mainstream-media outlets no longer serve their traditional role as gatekeepers with near-universal credibility.3 In this ecosystem, propaganda can rapidly spread far and wide, while efforts to correct false information are more expensive, often fall short, and frequently fail altogether.
Nor are all of the voices on social media authentic. Some inauthentic voices are those of paid human trolls, for example from the Internet Research Agency, revealed to have created and spread false information on behalf of the Russian government prior to the 2016 US presidential election.4 Others are Macedonian entrepreneurs who at one point discovered ways to monetize an affinity among some voters for fake news critical of Hillary Clinton.5 Some voices are not even human, as demonstrated by the introduction of “bots”—automated social media accounts designed to mimic human behavior online that further complicate our ability to discern fact from fiction within the ecosystem.
Rapid transmission of content and curated affinity networks polarize citizens around divisive issues and create waves of public opinion that can pressure leaders.6 So many different narratives emerge around complex events that polities splinter into their disparate informational universes, unable to agree on an underlying reality. Does this unprecedented availability of information and connectivity amplify the ability of actors to sow discord in the minds of the domestic publics and even the leadership of adversaries? Could these dynamics affect leaders and citizens to the degree that miscalculation or misperception can produce crisis instability ultimately leading to a nuclear exchange? Can governance mechanisms be designed and implemented that are capable of countering and combating the manipulation of information in this ecosystem?
This volume argues that the present information ecosystem increasingly poses risks for crisis stability. Manipulated information, either artificially constructed or adopted by a strong grassroots base, can be used by interested actors to generate pressure from various constituencies on leaders to act. At the same time, these leaders themselves face information overload and their ability to distinguish between true and false information may be impaired, especially if they are receiving information simultaneously from their own sources and other sources from within their constituencies. Such confusion can ultimately lead to inaction or bad decisions. Or, this environment might produce an accelerated reaction based on slanted or unanalyzed information. Most worrisome is the possibility that the rapid spread of disinformation or misinformation via social media may in the end distort the decision-making calculus of leaders during a crisis and thereby contribute to crisis instability in future conflicts, the effects of which could be most severe for nuclear weapons states.
The Psychology of Complex Decision Making and Nuclear Crisis
Many theories of deterrence rely on the rationality assumption, namely that a rational actor can be convinced that the cost-benefit ratio associated with initiating an attack is unfavorable due to a credible threat of retaliation by the adversary. The risk of a nuclear exchange during the Cold War led theorists to focus on how leaders might approach crises and what could be done to avert deterrence failure. This prompted debates about a range of putatively rational actions that nuclear states might engage in to build a reliable framework for deterrence: reassurances to allies by extending the nuclear umbrella, force postures designed to ensure a survivable retaliatory capability, credible signaling to convince adversaries that any attack would meet with massive retaliation, etc.7
But human decision makers are just that—human—and a great deal of psychological research in the past few decades has demonstrated the limits of rational thinking and decision making. Paul Slovic has written extensively about the human brain, decision making, and limits for comprehending the weight of decisions that could imperil large numbers of human lives. Various psychological processes come into play when considering a cognitive calculation on the value of lives lost in large numbers, including psychic numbing, tribalism, the prominence effect, imperative thinking, and victim blaming. As Slovic and Herbert Lin argue in chapter 3, this implies that leaders facing the task of making a decision on whether to order the use of nuclear weapons find it difficult to operate “rationally.”
Psychology also tells us that—more often than not—fast, intuitive judgments take precedence over slower, more analytical thinking. Fast thinking (also identified as System 1 thinking by the originator of the concept, Daniel Kahneman) is intuitive and heuristic, generating rapid, reflexive responses to various situations and—more often than not—useful in daily life. Slow thinking (also known by cognitive psychologists as System 2 thinking) is more conceptual and deliberative.8 Although both are useful in their appropriate roles, their operation in today's information ecosystem can be problematic. "Fast thinking is problematic when we are trying to understand how to respond to large-scale human crises, with catastrophic consequences," Slovic and Lin write. "Slow thinking, too, can be incoherent in the sense that subtle influences—such as unstated, unconscious, or implicitly held attitudes—can lead to considered decisions that violate one's strongly held values." The prevalence of heuristic and "imperative thinking" among humans suggests that an overarching important goal, such as national defense in the face of a nuclear crisis, would likely eclipse consideration of second-order effects and consequences, such as the likelihood of massive loss of life on all sides or catastrophic effects on the global environment, to the extent that such discussion is actively, if not subconsciously, avoided.9
Observers have always anticipated that leaders would be under severe time pressures when deciding whether or not to use nuclear weapons, the most important of which is “launch on warning,” the pressure to launch fixed land-based ICBMs before they can be destroyed on the ground by incoming enemy warheads. Fast, reflexive thinking (i.e., System 1 thinking) is more likely to be used under the kind of pressure this scenario highlights. Against a ticking clock, combined with the difficulty of comprehending the consequences of nuclear conflict, the argument that rational and deliberate decision making and deterrence will likely prevail, particularly under the added weight of the misinformation and disinformation that might propagate through the global information ecosystem during a crisis, is a highly debatable proposition.
The possibility that decision makers may rely on incorrect perceptions of potential adversaries has long been an important critique of rational deterrence theory. International relations theorists such as Robert Jervis have argued that the failure of deterrence can frequently be attributed to misperception among leaders: of intentions, of capabilities, of the consequences of conflict, etc. This misperception can have its roots in leaders’ psychology, in lack of information, and in leaders’ assumptions about what information the other side has or how they in turn perceive the situation.10
In the 1980s, Jervis had already argued that misperception was a quite common cause for deterrence failure. In today’s global information ecosystem, there are more data available than ever before. But rather than reducing the likelihood of misperception through the greater availability of information about potential adversaries, the present information environment provides unprecedented opportunities for manipulation of leaders’ and publics’ perceptions about intentions, capabilities, and consequences of conflicts—cheaply, rapidly, and at scale.
Tools and Tactics in the Modern Information Ecosystem
Social media have emerged as a modern vehicle for changing narratives. Social media are arguably optimized to try to keep users in a “fast” pattern of thinking, promoting impulsive and intuitive responses to engage users emotionally and maximize both advertising revenue and user experience.11 This characteristic of social media platforms may also provide avenues by which these same users can be manipulated more effectively for political aims. Although the ability for propaganda to be both insidious and anonymous is not a new phenomenon, automation, algorithms, and big data are being employed by various actors to selectively amplify or suppress information viewed by hundreds of millions of people via social media and online networks.12 There is evidence of targeted influence campaigns in at least forty-eight countries to date.13 Facebook, YouTube, WhatsApp, Instagram, and Reddit have also been platforms for a variety of divisive information operations.
As Mark Kumleben and Samuel C. Woolley note in chapter 4, campaigns have often made use of networks made of “bots”—partially or wholly automated—to introduce and circulate false or malign information, craft and control the narrative at the outset of a real event, and depict a manufactured consensus base around an issue. For example, an estimated 15 percent of Twitter’s approximately 335 million users (as of 2018) are bots. Bots are employed as a tool to promote a mix of authentic and inauthentic content, automate the amplification of specific sources, disrupt and overwhelm channels and conversations with irrelevant noise, and harass individuals or groups online to silence or intimidate them.
Information operations have more than commercial or political/electoral implications. We are also witnessing an increase in states using such strategies to shape potential battlefields. Using the example of information operations against NATO, Kate Starbird employs a mixed-method analysis in chapter 5 of this volume.
