Should We Ban Killer Robots?
About this book

Images of killer robots are the stuff of science fiction – but also, increasingly, of scientific fact on the battlefield. Should we be worried, or is this a normal development in the technology of war?

In this accessible volume ethicist Deane Baker cuts through the confusion over whether lethal autonomous weapons – so-called killer robots – should be banned. Setting aside unhelpful analogies taken from science fiction, Baker looks instead to our understanding of mercenaries (the metaphorical 'dogs of war') and weaponized animals (the literal dogs of war) to better understand the ethical challenges raised by the employment of lethal autonomous weapons (the robot dogs of war). These ethical challenges include questions of trust and reliability, control and accountability, motivation and dignity. Baker argues that, while each of these challenges is significant, they do not – even when considered together – justify a ban on this emerging class of weapon systems.

This book offers a clear point of entry into the debate over lethal autonomous weapons – for students, researchers, policy makers and interested general readers.


1 Of War Dogs, Bat Bombs, Mercenaries and Killer Robots

‘Shall I compare thee to a summer’s day?’ muses William Shakespeare as he reflects on the qualities of his beloved in the opening line of Sonnet 18. Plato, seeking to understand the nature of reality itself and our place in it, creates the famous allegory of a group of people chained in a fire-lit cave and watching shadows on the wall. These examples illustrate that, when we are wrestling with something that is challenging to express or comprehend, it is natural for us to reach for analogies. We look to other things that are relevantly similar, in the hope that they will help us to grasp the thing we are trying to understand. That can be very useful and we can learn a lot that way, but only if the analogy is well chosen (Shakespeare wouldn’t have done particularly well if he’d decided to compare his beloved to a bit of pocket lint!).
While they have important antecedents, the lethal autonomous weapons systems (LAWS) that are on the near horizon (or, in some cases, already in production) are a new and challenging phenomenon, and so we naturally find ourselves looking for analogies that may help us to understand how to respond to them. Unfortunately, we often reach for unhelpful and unedifying analogies. Most common are analogies drawn from science fiction, whether it be the Schwarzenegger-shaped Terminator or the creepily measured tones of HAL 9000 from the classic movie 2001: A Space Odyssey. The year 2001 was a long time ago, but we are still nowhere near AI capable of enabling ‘the machines’ to turn against their human masters. The issue at hand is weapons that are autonomous – many of which are likely to be driven by relatively pedestrian algorithms – rather than weapons that have achieved a level of intelligence that matches or surpasses that of human beings (what is usually referred to as ‘general AI’). There are certainly good reasons to be very cautious about developing weapons that incorporate general AI, but it’s important to see that the questions such weapons raise are different from those we need to answer about autonomy in general. For one thing, if a system is that intelligent, then we will have reached the point at which we need to decide whether the AI is capable of being held ethically and legally responsible for its actions. By contrast, as we will see, one of the main concerns about merely autonomous weapons is precisely that it is widely agreed that they cannot be held accountable, which opponents argue may leave an accountability gap.
So, if we are not to look to sci-fi for analogies that may help us to come to grips with the ethical issues associated with LAWS, what other options are available? A mostly overlooked but far more useful comparison is with animals.

The Dogs (and Horses, and Dolphins, and Pigeons, and Bats) of War

Animals have long been ‘weaponized’ (to use a term currently in vogue). The horses ridden by armoured knights in the Middle Ages were not mere transport. They were instead an integral part of the weapons system – they were taught to bite and kick, and the enemy was as likely to be trampled by the knight’s horse as to taste the steel of his sword. There have been claims that US Navy dolphins ‘have been trained in attack-and-kill missions since the Cold War’ (Townsend 2005), though this has been strongly denied by official sources. Even more bizarrely, during the Second World War the noted behaviourist B. F. Skinner led an effort to develop a pigeon-controlled guided bomb, a precursor to today’s guided anti-ship missiles. Using operant conditioning techniques, a pigeon housed within the weapon (which was essentially a steerable glide bomb) was trained to recognize an image of an enemy ship projected onto a small screen by lenses in the warhead. If the image shifted from the centre of the screen, the pigeon would peck at the controls, adjusting the bomb’s steering mechanism and putting it back on target. Despite what Skinner reported to be a project of considerable promise, Project Pigeon (known after the war as Project ORCON, from ‘organic control’) was cancelled, largely as a result of improvements in electronic means of missile control (Skinner 1960).
The strangeness of Project Pigeon is matched or even exceeded by another Second World War initiative: Project X-Ray. Conceived by Lytle S. Adams, a dental surgeon and an acquaintance of First Lady Eleanor Roosevelt, this was an effort to weaponize bats. The plan was to attach small incendiary devices to Mexican free-tailed bats and airdrop them over Japanese cities. It was intended that, on release from their bomb-shaped delivery system, the bats would disperse and roost in the eaves and attics of traditional wood-and-paper Japanese buildings. Once ignited by a small timer, the napalm-based incendiary would then start a fire that was expected to spread rapidly. The project was cancelled as efforts to develop the atomic bomb gained priority, but not before one accidental release of some ‘armed’ bats resulted in a fire at a US base that burned both a hangar and a general’s car (Madrigal 2011).
The animals most commonly used as weapons, though, are probably dogs. An early example comes from the mid-seventh century BC, when the basic tactical unit of mounted forces from the Greek polis of Magnesia on the Maeander (present-day Ortaklar in Turkey) consisted of a horseman, a spear bearer and a war dog. It was recorded that the Magnesians’ approach during their war against the Ephesians was to first release the dogs, who would break up the enemy ranks, then follow that up with a rain of spears, and finally complete the attack with a cavalry charge (Foster 1941, 115). Today, of course, dogs continue to play important military roles. They are trained and used as sentries and trackers, to detect mines and IEDs, and for crowd control. For the purposes of this book, though, it is the ‘combat assault dogs’ that accompany and support special operations forces that are of the greatest relevance.
These dogs are usually equipped with body-mounted video cameras and are trained to enter buildings and seek out the enemy. This enables the dog handler to reconnoitre enemy-held positions without putting soldiers’ lives at risk in the process. New technologies are also being developed to enhance the human–dog team. For example, in October 2020 the US military announced that it was working with Command Sight Inc. to develop augmented reality glasses for military dogs. According to the press release, ‘[t]he augmented reality goggles are specially designed to fit each dog with a visual indicator that allows the dog to be directed to a specific spot and react to the visual cue in the goggles. The handler can see everything the dog sees to provide it commands through the glasses’ (US Army 2020).
In addition to serving as a means of reconnaissance, combat assault dogs are also trained to attack anyone they discover who is armed (Norton-Taylor 2010). The dog itself is not usually responsible for killing the enemy combatant; instead it works to enable the soldiers it accompanies to employ lethal force – we might think of the dog as part of a lethal combat system. But at least one unconfirmed recent report suggests that combat assault dogs do sometimes kill the enemy directly. According to a newspaper report, in 2018 a British combat assault dog was part of a UK SAS patrol in northern Syria when the patrol was ambushed. A source quoted in the report gave the following account:
The handler removed the dog’s muzzle and directed him into a building from where they were coming under fire. They could hear screaming and shouting before the firing from the house stopped. When the team entered the building they saw the dog standing over a dead gunman…. His throat had been torn out and he had bled to death … There was also a lump of human flesh in one corner and a series of blood trails leading out of the back of the building. The dog was virtually uninjured. The SAS was able to consolidate their defensive position and eventually break away from the battle without taking any casualties. (Martin 2018)
Are there any ethical issues of concern relating to the employment of dogs as weapons of war? I know of no published objections in this regard, beyond concerns for the safety and well-being of the dogs themselves, which – given that the well-being of autonomous weapons is not an issue of interest here – is not the sort of objection of relevance to this book. That, of course, is not to say that there are no ethical issues that might be raised here. I shall return to this question in what follows, in drawing a comparison between dogs, contracted combatants and autonomous weapons. First, though, I turn to consider what seems to me to be another useful analogy for LAWS, namely mercenaries or contracted combatants.

The ‘Dogs of War’

In Just Warriors, Inc.: The Ethics of Privatized Force (Baker 2011), I set out to explore the ethical objections to the employment of private military and security contractors in contemporary conflict zones. Are they mercenaries, and, if so, what is it about mercenarism that is ethically objectionable? Certainly, the term ‘mercenary’ has predominantly pejorative connotations, which is why I chose to employ the neutral phrase ‘contracted combatants’ in my exploration, so as not to prejudge its outcome. Other common pejoratives for contracted combatants are ‘whores of war’ and ‘dogs of war’. While ‘whores of war’ provides a fairly obvious clue to one of the normative objections to contracted combatants (to be discussed later), I did not address the pejorative ‘dogs of war’ in the book, simply because I was unable at the time to identify any meaningful ethical problem associated with it. Perhaps, however, the analogy is a better fit than I then realized, as will become clear in the next pages. In what follows I outline the main arguments that emerged from my exploration in Just Warriors, Inc.
Perhaps the earliest thinker to explicitly address the issue of what makes contracted combatants ethically problematic is Niccolò Machiavelli, in The Prince. Two papers addressing the ethics of contracted combatants, one written by Anthony Coady (1992) and another by Tony Lynch and Adrian Walsh (2000), both take Machiavelli’s comments as their starting point. According to these authors, Machiavelli’s objections to mercenaries were effectively threefold:
  1. Mercenaries are not sufficiently bloodthirsty.
  2. Mercenaries cannot be trusted because of the temptations of political power.
  3. There exists some motive or motives appropriate for engaging in war that mercenaries necessarily lack, or else mercenaries are motivated by some factor that is inappropriate for engaging in war.
The first of these points need not detain us long, for it is quite clear that, even if the empirically questionable claim that mercenaries lack the killing instinct necessary for war were true, this can hardly be considered a moral failing. But perhaps the point is instead one about effectiveness, the claim being that the soldier for hire cannot be relied upon to do what is necessary in battle when the crunch comes. But, even if this claim is true, it is evident that it cannot be the moral failing we are looking for either. For, while we might cast moral aspersions on such a mercenary, those aspersions would be in the family of such terms as ‘feeble’, ‘pathetic’ or ‘hopeless’. But these are clearly not the moral failings we are looking for in trying to discover just what is wrong with being a mercenary. Indeed, the flip side of this objection seems to have more bite – namely the concern that mercenaries may be overly driven by ‘killer instinct’, that they might take undue pleasure in the business of causing death. This foreshadows the motivation objection discussed later.
Machiavelli’s second point is even more easily dealt with. For it is quite clear that the temptation to grab power over a nation by force is at least as strong for national military forces as it is for mercenaries. It could even be argued that mercenaries are more reliable in this respect. For e...

Table of contents

  1. Cover
  2. Series Page
  3. Title Page
  4. Copyright
  5. Acknowledgements
  6. Introduction
  7. 1 Of War Dogs, Bat Bombs, Mercenaries and Killer Robots
  8. 2 Trust, Trustworthiness and Reliability
  9. 3 Control and Accountability
  10. 4 Motives and Dignity
  11. Conclusion: So Then, Should We Ban Killer Robots?
  12. References
  13. End User License Agreement
