Insecurity and Emerging Biotechnology

Governing Misuse Potential
About this book

This book examines how emergent trends in innovation and its governance are raising new and old questions about how to control technology. It develops a new framework for understanding how emergent fields of science and technology come to be treated as security concerns, and the key challenges these fields pose from a global security perspective. The study focuses on the politics which have surrounded the emergent field of Synthetic Biology, a field which has become emblematic of both the potentials and limits of more preemptive approaches to governance. This highly accessible work will be of interest to scholars and practitioners working on the ethical responsibilities of innovators and the assessment of emergent technology, as well as the global governance of weapons.

© The Author(s) 2019
Brett Edwards, Insecurity and Emerging Biotechnology, https://doi.org/10.1007/978-3-030-02188-7_1

1. Introduction

Brett Edwards
Department of Politics, Languages and International Studies, University of Bath, Bath, UK

Abstract

This chapter introduces the way in which emergent fields of innovation capture the imagination of scientists, policy-makers and publics. It focuses in particular on the rise of proliferation and militarisation concerns about emergent fields of technological innovation. Such concerns are not new; however, today’s anxieties reflect contemporary relationships between science, the state and the global order. A central argument of this chapter is that there is a need for new approaches to thinking about the scope, practices and broader politics of governance directed at contemporary techno-scientific fields. To this end, the book argues that policy-making in this area grapples with a number of distinct but interrelated problems: defining the ethical responsibilities of innovators, predicting and managing the societal effects of emergent areas of innovation, and managing competitive drives at the international level.

Keywords

Disarmament · Innovation · Expertise · New and emerging science and technology
Every so often, a story appears in the popular press which discusses the prospect of a specific scientific breakthrough or technological development being misused by terrorists, criminals or governments. Usually, such concerns exist as a vague anxiety: they are either tolerated as an unwelcome but acceptable consequence of progress or dismissed as hyperbole or misplaced sentimentality. Occasionally, however, innovators appear to take science in a direction which is beyond the pale. This leads to questions about how innovation should be stewarded in the pursuit of some vision of national or global security. It also leads to questions about the moral limits which should be placed upon human inquiry, and about the more fundamental relationships between technology and humanity. These questions, or rather contemporary approaches to answering them, are at the centre of this book. This study places a particular emphasis on the challenges raised not only by technologies, but also, more fundamentally, by the systems we have built to produce them. Importantly, such questions tend to emerge in a tangle—something reflected, for example, in ongoing discussions about the military potentials of Artificial Intelligence (AI).
Recently, we have seen Google programmers (as well as Google’s public relations team) grapple with the issue of whether they should work on military projects. In this case, Google had taken a contract from the US Department of Defense (DOD) as part of Project Maven, a DOD initiative which focused on developing links with the burgeoning US AI industry. As part of this project, Google was to develop software which could automate some aspects of image analysis. A key challenge that US military intelligence has faced has been sorting through the huge volumes of footage collected by drones. As Marine Corps Col. Drew Cukor noted in an update on the project in July 2017:
‘You don’t buy AI like you buy ammunition. There’s a deliberate workflow process and what the department has given us with its rapid acquisition authorities is an opportunity for about 36 months to explore what is governmental and [how] best to engage industry [to] advantage the taxpayer and the warfighter, who wants the best algorithms that exist to augment and complement the work he does.’ 1
The announcement of Google’s involvement with the project led to disquiet among employees. Despite early reassurances from Google leadership that the project would not develop technologies directly involved in targeted killing, a number of individuals resigned in protest. In April 2018, around 3000 Google employees also signed a letter which called for Google to cancel the contract and to pledge publicly not to build warfare technology. In response, Google produced a code of conduct which stated that they would not continue to work with the military on weapons projects directed at people, or on projects which would contravene ‘widely accepted principles of international law and human rights.’ 2
This left the DOD with an acquisition problem, albeit one which the forces of market competition would undoubtedly solve. It also left programmers working in this area, who were not in principle opposed to the idea of working on weapons projects, in a quandary. Many US technologists may feel that they have a responsibility to help ensure US national security—and to help protect US service people on military operations around the globe. Even then, however, it is apparent that this does not give them carte blanche in ethical terms: conforming to the letter of the law may not be enough if they can foresee their work being misused by the state they work for or by others.
For some, such concerns may have been allayed by an article published in Nature 3 as developments at Google unfolded, which argued that scientists needed to continue to work with the US military in order to further US national security and, more indirectly, international security. The piece noted that the US continued to compete with Russia and China in this area. It also argued that civilian work would potentially end up being exploited even if US programmers refused to work on government defence projects—the only difference would be that it would enter weapons technology through more indirect routes. The author noted that the developers of mass-produced remote-controlled drones would not have envisioned the technology ending up in military applications, or foreseen the way in which it would be hacked in battlefields around the world.
In the context of the inevitable exploitation and proliferation of technology, it was argued that the US AI community should continue to work with the US military, and encourage the government to exploit the technologies they were developing in an ‘ethical’ way. According to the piece, then, it was up to developers to make up their own minds about whom they worked with, and on what projects, on a case-by-case basis. As the article noted:
Some proposals will be unethical. Some will be stupid. Some will be both. Where they see such proposals, researchers should oppose them.
This, then, placed the innovator and their apprehensions of the world at the centre of how the challenge posed by the omnipotence of technology was to be understood. Google would go on to claim in a publicly shared memo that they would not invest in:
Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints. 4
While this is an admirable aspiration, it remains just that. Many in society are still in disagreement about the social effects of inventions even with the benefit of hindsight, from the motor car to plastics and nuclear weapons. With emergent technology comes even greater uncertainty and ambiguity. Framing the issue in terms of the potential apprehensions and associated ethical responsibilities of single inventors, research groups or corporations appears to arbitrarily narrow assessments to debates about what we can reasonably expect innovators to foresee, and about their judgement of the potential benefits and harms. It also potentially contributes to the uncritical acceptance of broader structures of investment and oversight.
Despite this, our eye is often drawn to innovators at the cutting edge when thinking about the problem of technology control. This is perhaps in part because of a broader philosophical disposition towards individualism, particularly in Western ethical thought. It also reflects broader cultural norms about innovators and the relationships they have with their creations, as well as norms relating to the allocation of proprietary rights to inventors, which have increasingly come to dominate globally. It is also because scientists tend to be early advocates of emergent technology and, occasionally, key proponents of control.
Indeed, the biographies of innovators have taken on lives of their own—becoming heroic epics, as well as parables about hubris, at times greed, and the moral weight that discoveries place upon inventors. This has been acutely true for those involved in weapons development, from the machine gun to the Manhattan Project. At the same time, we have also seen scientists lead drives for restraint and prohibition. This includes scientists who sought to ban the bomb, scientists who campaigned for comprehensive biological and chemical weapon disarmament, as well as those who campaigned against Agent Orange and other environmental weapons. Today, technologists considering the ethics of the weaponisation of artificial intelligence are grappling with the same dilemmas that their predecessors did—mediated as they are by the time and place they find themselves in.
Indeed, briefly returning to the issue of military research at Google, a curious act of protest would emerge. One resigning employee began a petition to rename a Google conference room after Dr Clara Immerwahr. Immerwahr was an early twentieth-century chemist who was married to Fritz Haber, famous for his Nobel Prize-winning work on the industrial production of ammonia. Immerwahr committed suicide following an argument with her husband about his role in the German chemical warfare effort, shooting herself with Haber’s military revolver. This story has been subject to several retellings over the years, 5 but as far as events at Google are concerned, we can be fairly clear on the type of point that reference to this tale was meant to make.
There is no denying that many scientists have risked and lost much in the name of principle. There is also a certain romance in these acts of heroism, and in the Promethean dread and Faustian pacts of innovators. 6 And this way of thinking and writing about these problems is endemic. In the field of virology, for example, there have been concerns about certain lines of work on viruses with global pandemic potential, such as avian influenza. These focus particularly on work which seeks to create laboratory versions of diseases which are more deadly and more difficult to treat, in order to stay ahead of natural evolutionary processes and develop a deeper understanding of the underlying mechanisms of pathogenicity. 7 This has led to concerns framed in terms of public safety and proliferation risks, as well as questions about whether such work blurs the line between peaceful and offensive research. The work of specific scientists has become a flash-point in broader ongoing debates on the ethics of such work—and questions about the ethical responsibilities of scientists have been placed front and centre. 8
This means that it is particularly easy to forget that questions of ethics often extend far beyond the agency of individual innovators and the communities they work in. The innovator’s dilemma is embedded in broader questions about the appropriate role and goals of innovation within societies. This includes the need to balance precaution against militaristic, economic and exploratory drives. The way in which these questions are apprehended, and balances sought, varies between innovation communities and national contexts.
Such questions are also opened up much more readily in new fields of innovation than established ones. In part, this openness seems to stem from the sense of novelty associated with emergent technologies. This novelty has two dimensions. On the one hand, the breathless promissory discussion of emergent fields emphasises the powerful transformative potentials contained within them. Emergent fields are framed as having the potential to revolutio...

Table of contents

  1. Cover
  2. Front Matter
  3. 1. Introduction
  4. 2. The Three Paradoxes of Innovation Security
  5. 3. Synthetic Biology as a Techno-Scientific Field of Security Concern
  6. 4. Synthetic Biology and the Dilemmas of Innovation
  7. 5. Synthetic Biology and the Dilemmas of Innovation Governance
  8. 6. Synthetic Biology and Dilemmas of Insecurity
  9. 7. Conclusion
  10. Back Matter