Ethics and Autonomous Weapons

Alex Leveringhaus

About This Book

This book is amongst the first academic treatments of the emerging debate on autonomous weapons. Autonomous weapons are capable, once programmed, of searching for and engaging a target without direct intervention by a human operator. Critics of these weapons claim that 'taking the human out-of-the-loop' represents a further step towards the de-humanisation of warfare, while advocates of this type of technology contend that the power of machine autonomy can potentially be harnessed in order to prevent war crimes. This book provides a thorough and critical assessment of these two positions. Written by a political philosopher at the forefront of the autonomous weapons debate, the book clearly assesses the ethical and legal ramifications of autonomous weapons, and presents a novel ethical argument against fully autonomous weapons.

Information

© The Editor(s) (if applicable) and The Author(s) 2016
Alex Leveringhaus, Ethics and Autonomous Weapons, DOI 10.1057/978-1-137-52361-7_1

1. Ethics and the Autonomous Weapons Debate

Alex Leveringhaus, Manor Road Building, University of Oxford, Oxford, UK
Abstract
The introductory chapter offers an overview of the debate on autonomous weapons. It shows how the debate emerged, why it came about, and why it matters. It then considers the debate from the perspective of just war theory, giving a brief account of central ideas in the ethics of armed conflict. The chapter then makes a number of general remarks about the moral permissibility of weapons research.
Over the past couple of decades, we have witnessed remarkable advances in computer technology. The internet has become a constant feature of modern life. Smartphone apps now guide their users reliably through the bustling streets of modern cities. There is no sign that the pace of technological development is abating. The internet giant Google is not only developing apps that safely guide humans through cities; Google’s engineers are also working on programmes to safely guide cars to their destination, without a human driver!
Yet the risks and benefits associated with computer technology are not confined to the civilian sector. One must not forget that the internet was initially developed by and for the military. Likewise, new computer-based navigation systems are capable of guiding not only driverless cars to their destination but also unmanned military airplanes to their targets. Technological progress and the development, production, and deployment of new weapons systems go hand in hand. From a historical perspective, technology has changed the character of warfare. Conversely, the demands of warfare have often made technological innovation possible.
This book focuses on an important technological development likely to have a lasting impact on weapons technology: machine autonomy. There is now a lively debate on the implications of machine autonomy for weapons development to which this book contributes. Interestingly, this debate is not just confined to the ivory towers of academia but also features prominently in policy circles. At the time of writing, there were various campaigns underway to ban autonomous weapons. The possibility of such a ban was discussed at the United Nations in Geneva in 2014 and 2015. In 2013, the UN Special Rapporteur on Extrajudicial, Arbitrary, and Summary Executions, Christof Heyns, published a much-noted report on Lethal Autonomous Robots (LARS).1 In the report, Heyns does not call for an outright ban on autonomous weapons but for a moratorium on their development. The time afforded by such a moratorium, Heyns argues, would enable those involved in the debate on autonomous weapons to clarify a number of issues in order to determine whether a ban is necessary.
Heyns’ call for a moratorium is revealing. Strikingly, one of the issues demanding clarification is the very definition of an autonomous weapon. Experts and laymen alike usually have a good idea of what, say, a landmine is. Similarly, most people would be able to distinguish a fighter jet from a civilian airliner. Certain weapons technologies have become so embedded in political culture across the world that people have a pretty good idea of the weapons available to their military. Not so in the case of autonomous weapons. Clearly, the lack of definitional and conceptual understanding of the subject matter poses a problem for any debate on emerging weapons technologies, be it academic or policy related. How can one discuss the ethical and legal issues arising from autonomous weapons if one has no idea what these weapons actually are? How can one ban a weapon when one does not know what it is? To complicate matters further, even those who seek a ban on autonomous weapons concede that (some) relevant systems have not been developed yet. This is because autonomous weapons technology represents a trend in future weapons research. This makes autonomous weapons extremely elusive.
A central aim of this book is to demystify the concept of an autonomous weapon. Fortunately, in trying to find out how best to define autonomous weapons, we do not start with nothing. Historically, the debate on autonomous weapons is related to the debate on drone technology. This is not surprising because existing drone systems are likely to provide the blueprint for future autonomous weapons. The significant feature of drones—which unnerves people—is that they are uninhabited. This means that, unlike a tank, submarine, or fighter jet, no human person is located inside the drone. I use the term ‘uninhabited’ deliberately. Often drones are described as unmanned systems. This can be misinterpreted as suggesting that there is no human involvement in drone operations. However, drones are remote controlled by a human operator. Hence, I think it is more accurate to say that drone warfare is uninhabited warfare, rather than unmanned warfare.
Autonomous weaponry, in a very basic sense, closes the gap between uninhabited and unmanned warfare. The worry fuelling the debate on autonomous weapons is that the role of the operator in uninhabited systems can be reduced up to a point where an autonomous weapon can engage a target without further human assistance. It is important to emphasise that an autonomous weapon still needs to be pre-programmed by a human operator. But advances in Artificial Intelligence (AI) programming techniques make it possible for the machine to operate self-sufficiently—that is, without further assistance or guidance from an operator—once it has been programmed. The US military operates with a useful distinction in this respect.2
1. In-the-loop systems: The operator is directly involved in the operation of the system by making all the decisions. The machine is remote controlled by the operator.
2. On-the-loop systems: The operator has pre-programmed the machine, and the machine can operate self-sufficiently. Nevertheless, the operator remains on stand-by and can potentially override the machine.
3. Out-of-the-loop systems: The operator has pre-programmed the machine, and the machine can operate self-sufficiently. The operator does not remain on stand-by.
Arguably, on-the-loop and out-of-the-loop systems are best classified as autonomous systems. Out-of-the-loop systems, in particular, would give rise to something resembling unmanned, rather than just uninhabited, warfare. Human operators are still involved in warfare, but given that autonomous machines can operate self-sufficiently, they are further removed from the battlefield—physically and psychologically—than the operator of an in-the-loop system.3
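To make the three levels of operator involvement concrete, the following is a minimal illustrative sketch in Python. It is not drawn from the book or from any military document; the names (ControlMode, WeaponSystem, the 'hypothetical-sentry' example) are hypothetical and serve only to summarise the taxonomy above.

from enum import Enum
from dataclasses import dataclass

class ControlMode(Enum):
    # The three levels of operator involvement described above (hypothetical labels).
    IN_THE_LOOP = "in-the-loop"          # operator makes every decision; machine is remote controlled
    ON_THE_LOOP = "on-the-loop"          # machine runs on its programming; operator stands by to override
    OUT_OF_THE_LOOP = "out-of-the-loop"  # machine runs on its programming; no operator on stand-by

@dataclass
class WeaponSystem:
    name: str
    mode: ControlMode

    def requires_operator_decision(self) -> bool:
        # Only in-the-loop systems need an operator's decision for each action.
        return self.mode is ControlMode.IN_THE_LOOP

    def operator_can_override(self) -> bool:
        # In-the-loop and on-the-loop systems keep a human able to intervene; out-of-the-loop systems do not.
        return self.mode in (ControlMode.IN_THE_LOOP, ControlMode.ON_THE_LOOP)

# Example: an on-the-loop system operates self-sufficiently once programmed but can still be overridden.
sentry = WeaponSystem("hypothetical-sentry", ControlMode.ON_THE_LOOP)
assert not sentry.requires_operator_decision() and sentry.operator_can_override()

On this sketch, the difference between on-the-loop and out-of-the-loop systems comes down to whether a human can still override the machine once it has been programmed.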
At a basic level, then, autonomous weapons seem to have three features: (1) they are uninhabited, (2) they need to be pre-programmed, and (3) they can, once pre-programmed, carry out more or less complex military acts without further assistance from an operator. Naturally, there is much more that can be said about the concept of an autonomous weapon. One question, for instance, is whether autonomous weapons really present anything new. Once pre-programmed, heavily automated weapons systems, such as missile defence systems, are already capable of carrying out complex tasks without direct guidance from an operator. And these systems, it is worthwhile pointing out, are perfectly legal. Automated missile defence systems, such as Israel’s Iron Dome, do not break any laws relating to targeting. So, what is new in the autonomous weapons debate? And why are some activists arguing for a ban? I tackle this and related questions further below and in the next chapter.
But this book is not just about conceptual questions. In addition, it provides a philosophical and ethical perspective on autonomous weapons. How should we judge autonomous weapons? Are they a good or bad thing? These questions are central to the autonomous weapons debate. For instance, roboticist Ronald Arkin, one of the main advocates of autonomous weapons, argues that they are a good thing. Autonomous weaponry, for Arkin, presents a real opportunity to enhance compliance with the laws of war.4 The philosopher Robert Sparrow, in contrast, opposes autonomous weapons. The deployment of these weapons, Sparrow thinks, creates ‘responsibility gaps’.5 These are situations in which no one can be held responsible for the use of force. In this book, and especially in the third chapter, I want to subject these arguments to greater philosophical scrutiny. I also develop my own ethical approach to autonomous weapons. Taken together, this should give the reader a good idea of the ethical issues arising from autonomous weapons.
This chapter lays the foundation for my subsequent conceptual and ethical analysis of autonomous weapons. It proceeds as follows. In the second part of the chapter, I differentiate the debate on autonomous weapons from a number of related debates. This is necessary because issues from these different debates are often conflated with problems in the autonomous weapons debate. In the third part of the chapter, I provide background information on ethical approaches to armed conflict, most notably just war theory. I shall also make some preliminary observations about the implications of a just war approach for autonomous weapons and vice versa. In the fourth and final part of the chapter, I tackle two main criticisms of the just war approach in relation to autonomous weapons. I hope that the arguments in the fourth part of the chapter are useful not just to those interested in autonomous weapons but also to those with a more general interest in the ethics of weapons research and development. Regrettably, the issue of weapons research has not been treated in much detail by ethicists. The points raised in the fourth part of the chapter should be seen as the starting point of a wider debate on the ethics of weapons research.

What This Book Is Not About

The autonomous weapons debate overlaps with a number of other debates, which I cannot tackle in detail here. Hence, I should be upfront about the topics this book does not discuss.

Autonomous Weapons and the Ethics of Cyber Warfare

The issue of machine autonomy and its use by the military is highly relevant in the cyber domain. One could imagine a software robot that can operate autonomously once it has been programmed. Such a robot could move from one computer to another without any further assistance from its programmer. It may also be capable of replicating its code while ‘infecting’ a computer. The cyber domain has recently received increased attention from militaries across the world, and cyberattacks are clearly perceived as a potential threat to national security by policymakers.
Notwithstanding the importance of the challenges posed by the use of machine autonomy in cyberspace, I shall not cover this topic in this book. Firstly, more philosophical work needs to be done on the conceptualisation of the cyber domain as a military domain. For this purpose alone, a separate book would be required. Secondly, it is not clear whether existing normative frameworks that have been developed in order to regulate military operations in the physical domains of air, land, and sea can be readily transferred to the cyber domain. Some think they can; others are more sceptical. Discussing these different frameworks in order to determine which best captures the distinctiveness of the cyber domain is beyond the scope of this work.
So, rather than asking whether we need to decide between different regulatory frameworks in order to adequately respond to the challenges posed by machine autonomy, I want to find out how autonomous weapons relate to established frameworks that regulate military activities, most notably just war theory. Hence, I shall focus on the production of autonomous weapons for, and their deployment in, the established defence domains of air, land, and sea. The use of autonomous weapons in these domains already raises a number of critical issues, so I refrain from opening an additional can of worms here.

Autonomous Weapons, Ethics, and ‘Super-Intelligence’

This book—though this might be disappointing to some—is not a work of science fiction. The debate on autonomous weapons overlaps considerably with that on AI. This is hardly surprising, given that autonomous weapons are made possible by advances in AI programming techniques. That said, the classic philosophical debate on AI, primarily in philosophy of mind, concerns the question whether machines can think and whether humans are such machines. As such, it has been divorced from any practical work in AI research as well as computer science. This book is concerned with the philosophical questions arising from the practical dimension of AI research, rather than the (future) practical questions arising from the philosophical dimension of the AI debate. In other words, I am interested in whether the availability of sophisticated AI programming techniques for military applications poses new ethical challenges, and whether those who develop these techniques are permitted to make their expertise available to the military.
This restriction of scope is especially important when it comes to the recent debate on (artificial) ‘super-intelligence’.6 The question of super-intelligence, while interesting and thought-provoking, is largely irrelevant to this book. The starting point of that debate is the hypothesis that AI may develop capacities that outstrip those of its human creators. This means that AI, under those circumstances, has the potential to become uncontrollable. It might, in fact, start to evolve in ways that are not only beyond human control but also, possibly and worryingly, beyond human understanding. Needless to say, in order to arrive at such a scenario, one needs to accept a number of ‘big ifs’. But even if one is critical of the assumptions built into the super-intelligence hypothesis, the achievement of super-intelligence, or something closely resembling it, may at least be a theoretical possibility.
My view is that the super-intelligence hypothesis is interesting for the ethical debate on armed conflict but offers little by way of solving the problem of autonomous weapons. In a worst-case scenario where super-intelligence starts to evolve beyond human understanding and control, it is hard to see how it could be harnessed for military use. Rather, the worst-case scenario points to a conflict between super-intelligence and humanity. In that case, humanity would have other things to worry about than autonomous weapons. Compared to a worst-case scenario, an ideal scenario where super-intelligence can be effectively harnessed in order to accomplish human goals has positive repercussions for armed conflict. The question is whether super-intelligence could assist humanity in transcending some of the causes of armed conflict—be they cultural or material (energy, resources, etc.). Super-intelligence would represent a chance for a world without armed conflict. Regardless of whether one endorses an optimistic or pessimistic scenario, the arrival of super-intelligence would represent a true civilisational paradigm shift, and, in many ways, would force us to go back to the drawing board when considering the regulation of political structures as well as society in general. For the current debate on autonomous weapons, we can neglect the super-intelligence hypothesis.
Critics could reply that if the scope of the present inquiry is restricted in this way, autonomous weapons do not offer anything new. The philosopher Robert Sparrow might raise this point. Sparrow’s influential work on autonomous weapons (or Killer Robots, as he calls them) contends that autonomous weapons are only philosophically interesting insofar as their capacities are comparable to those of humans.7 Although Sparrow’s work precedes the super-intelligence debate by roughly a decade, there is a clear overlap with the super-intelligence hypothesis. The ‘Killer Robots’ in Sparrow’s paper are ‘human but not quite’. Crucially, for Sparrow, although their agency might approximate that of humans, it is not sufficient to hold ‘Killer Robots’ responsible for what they do. By contrast, if autonomous weapons do not approximate human agency, there is nothing, Sparrow thinks, new about these weapons. After all, since the advent of computers, the military has been using sophisticated algorithms and, as a result, many existing weapons systems rely on automated functions. In short, the ‘computerisation’ of warfare is not a new phenomenon.
On the one hand, I agree with Sparrow that, if placed on a continuum with existing precision-guided weapons, autonomous weapons are not unprecedented. On the other hand, I disagree with Sparrow’s claim that autonomous weapons, unless conceived along the lines of a super-intelligence scenario, are philosophically uninteresting.
Firstly, while philosophical work on armed conflict has proliferated over the last two decades or so, the normative repercussions of the computerisation and digitalisation of ...
