1 Introduction
In late 2014, I was asked by Jonathan Moreno to develop a small grant on neuroethics and national security. At first, this was simply a step in the ladder from postdoctoral fellow to assistant professor: a small amount of funding from the Greenwall Foundation to display on my CV. The money would largely go into the University of Pennsylvania's coffers, but in exchange I'd have greater status on the academic job market. I was familiar, I thought, with concerns about security and neuroethics. With rare exceptions, that literature was overwhelmingly concerned with functional magnetic resonance imaging (fMRI) and "lie detection." The first paper I wrote in graduate school had been on a technology referred to as "brain computer interfaces" (White, 2008), or BCIs, and their implications for military ethics (Evans, 2011), but I hadn't seen much on it since. And, of course, there was the human enhancement literature, but even the literature on military enhancement seemed largely concerned with questions of what was "human"—questions in which I had no interest.
What I wasn't expecting, as an Australian recently moved to the US, was meeting folks at the pointy end of neuroscience and national security.
Not long after, I was invited to a talk at Penn by William Casebeer. Casebeer is a former student of Philip Kitcher, and like many of Philip's students he is a careful, rigorous, and heavily naturalistic philosopher (Casebeer, 2003). But, in addition to those credentials, Casebeer, a retired Air Force Lieutenant Colonel, had taught philosophy at military academies around the country. In 2014, he was an outgoing program manager at the Defense Advanced Research Projects Agency (DARPA), the blue-sky research arm of the US Department of Defense (DOD).
And what Bill had to say had nothing to do with lie detectors.
Rather, Casebeer's talk centered on a program he had developed at DARPA that sought to understand the neurobiological basis of narrative, and how the stories we tell—how we convey information, not simply what information we convey—influence us. This had promising applications in improving and streamlining complex warfighter training in the twenty-first century. But, Casebeer said, there was another application: detecting radicalization online by piecing together the kind, order, and method of delivery of propaganda.
I would later learn that the programs run by Casebeer during his tenure (DARPA program managers are rotated out rapidly to keep the agency fresh) were only the tip of the iceberg. The Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, launched during the Obama administration to rapidly develop America's capacity in neuroscience, is heavily involved with the DOD. Of the $110 million or so budgeted at the outset of the BRAIN Initiative in 2013, almost half was committed by DARPA (White House, 2013). That number has only grown over time, and DARPA's involvement in neuroscience has deepened as the BRAIN Initiative has borne fruit. The national security applications of neuroscience are now diverse, and include research into narratives, pharmacology, and the BCIs I'd written on years before. Neuroscience is booming in national security, and its reach into that institution is broad and deep.
Neuroscience is attractive in an operational sense for its potential capacity to detect terrorists, help the wounded walk again, and perhaps one day find a cure for post-traumatic stress disorder. It is also a point of "convergence" among other sciences, including artificial intelligence, synthetic biology, and nanoscience. These convergent sciences and their technological applications promise a range of fantastic possibilities, including cures for a wide range of diseases, sentient machines, the end of manual labor, and extremely long (or even indefinite) lifespans.
Yet, as with all new discoveries, these possibilities have deep ethical implications. By "ethical," I mean that the decisions made before, during, and after the development of these technologies bear on questions about what kinds of values are important, and on tradeoffs between values including but not limited to human and nonhuman welfare, equality, justice, and freedom. As national security applications, moreover, these technologies exist in a space in which otherwise impermissible acts, such as killing, may be available to states and their proxies in securing their interests. It is that intersection, between neuroscience and national security, that is the subject of this book.
1.1 National Security
Before moving into the meat of this work, some definitional concerns need to be addressed. The first is what I mean by "national security." National security, as I use the term here, is a social institution of the modern nation state. By "social institution," I mean one of a collection of organizations, policies, laws, and norms that fulfill an important moral end (Miller, 2010). Other institutions include healthcare, education, journalism, and the academy. Social institutions are an important level of analysis in ethics and political philosophy as contributors to, and instantiations of, a moral society. They are also the drivers of important decisions that influence the lives of millions of people.
It might be objected that national security isn't an institution so much as a collection of institutions. Criminal justice, one could argue, is a separate institution that protects the rights of citizens; state militaries are an institution that protects state sovereignty against external threats. This is a common distinction, but treating national security as a broad level of analysis is useful here for a couple of reasons. Importantly, national security is much broader than just these two organizations. It includes, for example, transnational law enforcement and intelligence operations within its conceptual bounds. What links all of these organizations together is a common telos, or end: the use of force to maintain the structure of a society. Whether the structure of a society is ultimately justified is another question, and one I won't answer in detail here. Assuming, however, that our current society—and here, I am primarily concerned with the US—is in part justified (e.g. as a liberal democracy), even if parts of it are decidedly immoral (e.g. as a nation with an unaccounted-for colonial past, or a history of slavery it has yet to fully reckon with), national security is the basic social institution charged with ensuring that the moral project of a society is maintained. Importantly, moreover, national security is empowered to use force, including lethal force, to achieve that end.
Throughout, I will distinguish between different parts of the social institution of national security, as they achieve the central telos of the institution in different ways. However, I take this as partly a coordination problem, given that national security interacts with different populations who may have different claims against it. In particular, the moral claims of other nation states and their resident populations are different from the claims of the local population of a state. However, the central aim of national security is maintaining the moral project of a particular nation state, and the organizations and roles within it can be taken as sharing a common moral end even if they approach this end in different ways. This is broadly analogous to the way that any treatment of the institution of healthcare (Miller, 2010) must necessarily approach the roles of, inter alia, public health and clinical medicine, which interact in important ways but involve distinct moral commitments (Childress et al., 2002; Childress and Bernheim, 2003).
An important additional reason to treat national security writ large as the subject of analysis is that, in the aftermath of the attacks on the World Trade Center and Pentagon, and the crash of UA 93 in 2001, the organizations under the umbrella of national security have become progressively less distinct. The Federal Bureau of Investigation (FBI, 2020), for example, is charged with intelligence collection, national security, and law enforcement operations according to its current public information page. The Department of Homeland Security, formed in the aftermath of the attacks of 2001, concerns itself with threats both foreign and domestic. Even the Department of Health and Human Services includes offices devoted to external threats, particularly those concerned with biological terrorism.
Morally, there is a difference between the acts of armed conflict, intelligence collection, and law enforcement (Evans et al., 2014). However, the prosecution of these acts is spread among a series of organizations that, here, I understand collectively as the institution of national security. This serves the important role of clarifying the different moral limits on these acts (and thus the organizations that perform them), and the historical story of how—for better and for worse—these organizations came to be. To understand the ethical issues neuroscience poses, we should treat all three broad classes of national security organization—military, intelligence, and criminal justice—together.
1.2 Neuroscience
Neuroscience, like national security, requires some stipulation. By "neuroscience," I am concerned with scientific inquiry into the functioning of the brain, and its relation to individual and collective human behavior. This includes inquiries we would conventionally describe as "neuroscience," such as studying brains with large diagnostic devices like fMRI scanners. But it also involves aspects of cognitive science, microbiology, clinical psychology, medicine, forensics, and even computer science.
Importantly, neuroscience is concerned not simply with brains, but also with minds, mental states, and cognition. The relation between these categories is famously contentious, especially in a world as interconnected and mediated by technology as our own. Neuroethics, in particular, has engaged substantively with technologies that interact with human cognition with varying degrees of directness. So a little more should be said here.
In general, I am skeptical of theories that posit the brain as the sole location of cognition, and of the mind in general. It has been a very long time since humans relied exclusively on their brains for cognition, and that has only become more true in recent years, as information has exploded in both quantity and variety. I am thus a proponent of Clark and Chalmers's extended mind thesis (1998), but moreover of Neil Levy's extended cognition thesis, which holds that the structure of cognition is not exclusively located in the brain, but also in other objects (Levy, 2007).
I will not attempt to defend either of these theses, which have been interrogated at length by other authors. This view, however, has one particular advantage. Because the mind and cognition are not located exclusively in the brain, specific concerns about neuroscience and technology as "impacting the mind" are largely eschewed in this work. While this means that I ultimately have to answer questions about what makes these technologies worthy of distinct ethical concern (which I address in Chapter 6), it does mean that I am not terribly concerned about the mere fact that these technologies impact the mind. This is important, as some of the emerging insights from neuroscience that apply to national security do not express themselves as "neurotechnologies" qua technologies that use direct chemical or electrical intervention on neurons to influence the mind. Some technologies, such as BCIs or certain chemical weapons, do this; others, such as propaganda developed through neuroscientific insights into group behavior, do not. I consider both worthy of exploration, and indeed will show how more and less direct applications of neuroscience interact with national security, and with each other.
1.3 Reality versus Hype
Many of the technologies I discuss have been in development for years or even decades, but have yet to be deployed as fully functional technologies. Concerns about suggestion and "mind control," for example, date back half a century to the Cold War, but are being resurrected with new insights into the brain (Seed, 2011). That said, we haven't really achieved what we might call the cinematic version of mind control just yet. It's not clear if, or when, we will achieve such a capability. So a skeptic might respond to novel neuroscience research with "so what?"
We should take these technologies seriously, however, in a very specific sense. First, we know that groups want this technology and are willing to spend hundreds of millions of dollars to get it. Intelligence services, domestic and foreign-looking, want to know how to manipulate people's thoughts to find actionable intelligence (Wurzman and Giordano, 2014). There are those in the criminal justice system who want new technologies that can verify whether a person is lying for the purposes of securing a conviction (Dresser, 2008; Morse, 2018). This means that there is incentive, and action, to pursue some of these technologies even if the science is still very much out.
A common refrain is that "when change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult, and time-consuming." This is the so-called "Collingridge dilemma," named after technologist David Collingridge (1980), and it tends to dog those who think about the ethics of technology. The purpose, then, of attending to the aims of these national security organizations is to attempt to decide what change is needed while it is still easy to make that change. This is an important move, I take it, in approaching the ethics of technology even under conditions of great uncertainty.
The other reason is that we're closer than you might think. In 2009, I was writing about BCIs when the focus of the technology was still primarily animal studies (Evans, 2011). By 2015, there were people driving wheelchairs with BCIs, and one person had even flown a fighter jet (in simulation). We're not living in Ghost in the Shell yet, but our cyborg future is closer than the news might suggest. There is thus more than enough science out there to start forming some interesting normative conclusions, and hopefully to begin acting on those conclusions.
1.4 Structure of This Book
With that in mind, this book proceeds in three parts. The first part, building on this short introduction, deals with four classes of emerging neuroscience and technology that have applications in national security. The first of these is advances in behavior prediction. It is there that I will deal with, among other things, Casebeer's brainchild, the N2 program, and its promise in detecting terrorists before they are radicalized.
Predicting behavior gives way to the possibility of controlling and modifying behavior. This has clear implications in forming a scientific basis for the interrogation and rehabilitation of detainees in armed conflict, counterterrorism, and law enforcement scenarios. It also has important implications for training a new generation of soldiers, and for curing them of the psychological ills they often bring home from battle. This chapter covers the state of the art and the aspirational ends of military neuroscience's foray into the science of persuasion.
In Chapter 4, I turn to enhancement. Professional militaries worldwide are becoming older, and the demands of twenty-first-century conflicts have extended expectations for military forces, and Special Operations Forces in particular. This has created a new incentive, in the age-old quest for a better warrior, to seek soldiers with enhanced cognition in addition to enhanced physiology. In this chapter, the promise of soldier enhancement is explored—from the mundane and soon-to-be-used to the blue-sky and far-in-the-future. While the focus here is on enhancements achieved through advances in neuroscience and related disciplines, the strong links between body and mind are also explored.
The final chapter of Part I concludes our investigation with a view of new weapons technologies achievable in principle through advances in neuroscience. Two important areas are considered: lethal and nonlethal biochemical agents, and attacks on BCIs. In each case, the current development of these technologies is canvassed, and their future potential is outlined.
Part II begins with a discussion of how and why neuroethics has largely neglected concerns of national security. While there has been considerable attention paid to legal concerns around the domestic use of neuroscientific findings in criminal proceedings, comparatively little has been paid to direct use in law enforcement, and almost none to unique challenges in counterterrorism, intelligence collection, and armed conflict. I draw on previous work on ethics and national security to provide a progra...