This chapter describes the impact that the introduction of autonomous drones to the battlespace may have on efforts to hold individuals responsible for violations of the laws of war and gross violations of International Human Rights Law, such as crimes against humanity. 1 It reviews theories of criminal responsibility that may be applicable when autonomous drones are employed to perpetrate crimes. I argue that the use of kinetic autonomous drones should not necessarily produce unacceptable "accountability gaps." However, an "accountability dichotomy" may develop between states with the resources to field autonomous drones that can communicate and record all relevant military data and less sophisticated states and armed groups that do not have access to this technology. Moreover, as the speed and military advantage of autonomous technology increase, a danger exists that drone operators and their commanders will gradually cede their human control and supervision over such weapons. In that scenario, proving individual criminal responsibility for incidents involving autonomous drones will be more challenging. For the purposes of this chapter, the term "autonomous drones" refers to airborne weapon platforms that have the capability to make and execute targeting decisions without human supervision (U.S. Department of Defense 2012: 13-14).
1.1 Autonomous Drones and Challenges for Commanders
Under international law, the existence of individual criminal responsibility often turns on the intent or mens rea of the accused. Deployment of an autonomous weapon system, however, involves a far more complex mental process than, for example, firing a pistol at the enemy or deploying a unit of human-operated drones in the battlespace. The soldier who aims her pistol at a human target (1) will understand the capability of that weapon to injure or kill the target and (2) will "know," to the best of her ability, whether the target is an enemy combatant or a civilian. Similarly, the General who deploys a fleet of drones into a battlespace must consider, inter alia, (1) the range, accuracy, and explosive power of the drone's weaponry to be directed at the enemy, (2) the possible presence of civilians in the area, and (3) the levels of training, stamina, discipline, and supervision of the drone's human pilots, including their ability to comply with International Humanitarian Law (IHL). 2 Thus, the General who deploys human drone crews must understand and have confidence in the ability of those soldiers to think about the changing circumstances of their battlespace and to make appropriate judgments consistent with their mission.
Let us assume, however, that in 2065 the General deploys a brigade of autonomous drones. These robotic airborne vehicles, possessing great firepower, would have sufficient artificial intelligence to locate, identify, and destroy targets that have been programmed into their computer software, while respecting the rules of IHL and any pertinent rules of engagement. The option to deploy unmanned aircraft without human pilots and gunners on board could save many lives.
Nevertheless, this option also would increase the demands on the General's already taxed mental capacity. Prior to such deployment, the General must consider (1) the first two points previously mentioned vis-à-vis human-operated drones, (2) the autonomous drone's ability to comply with IHL in the particular (fluid or not) battlespace, (3) whether the mission or the expected circumstances of the battlespace may require the exercise of increased levels of human supervision and control over the robotic airborne vehicles (i.e. restrictions on autonomy), (4) whether the General and/or her staff will have the capacity to deactivate the autonomous drone immediately should conditions require it, (5) the robustness of the software that operates the artificial intelligence of the autonomous drone, in particular whether enemy forces may have the ability to tamper with and/or take control over the autonomous drone(s), (6) the level of training (technical, operational, and with respect to the laws of war) of the human "operators" or monitors of the autonomous weapon systems (if any), and (7) whether the General and/or her staff will be able to monitor the rapid communications that will occur between autonomous drones and intervene when necessary. This last challenge will be particularly complex if the autonomous drones are deployed as a form of "swarm" technology, i.e. large numbers of autonomous mobile weapon platforms that communicate amongst themselves and adjust their behaviour, individually and as a "swarm," when necessary to achieve the mission. 3
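These seven considerations can be read as a pre-deployment checklist. The following minimal sketch expresses that checklist in Python purely for illustration; the class name AutonomousDeploymentChecklist, the field names, and the simple "every consideration satisfied" rule are assumptions invented for this example and do not reflect any real doctrine, weapon-review procedure, or legal test.

from dataclasses import dataclass, fields

@dataclass
class AutonomousDeploymentChecklist:
    weapon_effects_understood: bool          # (1) range, accuracy, explosive power; possible civilian presence
    ihl_compliance_in_battlespace: bool      # (2) the drone can comply with IHL in this particular battlespace
    supervision_level_appropriate: bool      # (3) restrictions on autonomy match the mission and circumstances
    immediate_deactivation_possible: bool    # (4) the commander or her staff can deactivate the drone at once
    software_robust_against_tampering: bool  # (5) enemy forces cannot tamper with or take control of the drone
    monitors_adequately_trained: bool        # (6) technical, operational and law-of-war training of human monitors
    swarm_communications_monitorable: bool   # (7) inter-drone communications can be monitored and overridden

    def deployment_supportable(self) -> bool:
        # In this toy rule, deployment is supportable only if every consideration is satisfied.
        return all(getattr(self, f.name) for f in fields(self))

On this sketch, a single unsatisfied consideration would mean the commander could not support deployment, which mirrors the weight the paragraph above gives to each factor in the decision to field the system.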
Should one or more of the autonomous drones attack and kill civilians, on what basis might individual criminal responsibility accrue to the General who ordered their deployment? If the General intentionally deployed the autonomous drones with the knowledge that they were deficient vis-à-vis one or more of the targeting rules mentioned in Articles 48-59 of Additional Protocol I (Geneva Conventions Additional Protocol I 1977) or with deliberate indifference to the existence of such deficiencies, arguably the General conducted an indiscriminate attack for which she should be held accountable [Geneva Conventions Additional Protocol I 1977: Art. 51(4), Art. 85(3)(b)].
From a legal perspective, more complex tasks and/or more limited autonomous technology will signal a demand for greater levels of human judgement (as well as communication and accountability) during the mission. Therefore, a sound understanding of the function, capabilities, and limitations of the semi-autonomous and autonomous weapon technologies available to armed forces will become a prerequisite for the command of modern military units. Increased autonomy will enable higher speeds of military engagement, resulting in greater military advantages for the armed forces that field such systems. In parallel, considerations of military advantage mean that the increasing speed with which future weapon systems will absorb, process and transmit information and react to events may influence decisions about acceptable levels of human judgement and permissible levels of autonomy (Jackson 2014).
1.2 Implications of the Inevitable Employment of Increasingly Autonomous Drones
The potential military advantages for states and organized armed groups in possession of autonomous weapon technologies and capabilities will lead, inevitably, to the increasing use of autonomous drones (Corn 2014). However, to ensure compliance with IHL, the intervention of human judgement on the activity of autonomous drones will be required at four distinct stages of military operations: (1) the procurement/acquisition stage, (2) the planning stage of the mission or attack, when a human must choose which weapon system to employ (systems will vary across a range of autonomy), (3) following the choice of an autonomous drone, a decision as to the level of human attention, if any, to assign to the system during the mission but prior to an attack, and (4) specific inputs of human judgement, if necessary, to comply with international legal obligations and/or political interests immediately before, during, and after the attack. During stages 2-4, human control over the autonomous drone should remain until the human supervisor ensures, at each of the final three stages, that the weapon system complies and will continue to comply with international law and applicable rules of engagement (Saxon 2014).
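To make the staged structure of this argument concrete, the following minimal sketch models the four stages as sequential human-review gates. It is a hypothetical illustration only: the names Stage, HumanReview, and proceed, and the rule that any skipped, out-of-order, or failed review halts the system, are assumptions made for the example rather than a description of any existing control architecture.

from dataclasses import dataclass
from enum import Enum, auto

class Stage(Enum):
    PROCUREMENT = auto()        # (1) legal review at acquisition/procurement
    WEAPON_SELECTION = auto()   # (2) planning: choosing which weapon system to employ
    SUPERVISION_LEVEL = auto()  # (3) deciding the level of human attention during the mission
    ATTACK_INPUTS = auto()      # (4) human judgement immediately before, during and after the attack

@dataclass
class HumanReview:
    # A single human supervisor's determination at one stage.
    stage: Stage
    complies_with_ihl: bool
    complies_with_roe: bool

    def approved(self) -> bool:
        return self.complies_with_ihl and self.complies_with_roe

def proceed(reviews: list) -> bool:
    # The operation continues only if every stage so far has been reviewed in order
    # and approved; any skipped, out-of-order, or failed review halts the drone.
    expected = list(Stage)
    if [r.stage for r in reviews] != expected[:len(reviews)]:
        return False
    return all(r.approved() for r in reviews)

reviews = [
    HumanReview(Stage.PROCUREMENT, True, True),
    HumanReview(Stage.WEAPON_SELECTION, True, True),
    HumanReview(Stage.SUPERVISION_LEVEL, True, False),  # chosen supervision level breaches the RoE
]
print(proceed(reviews))  # False: human control is retained and the attack does not go forward

On this toy model, the drone advances past a stage only when a human supervisor has positively confirmed compliance with IHL and the rules of engagement, mirroring the requirement that human control remain in place until such assurance exists at each of the final three stages.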
When accidents or crimes occur during and/or due to the use of autonomous drones, the actual degree of autonomy and the levels of human supervisory control over the machine must form part of any accountability analysis. Once the speed of autonomous technology reaches levels that preclude effective human supervision and control, however, proof of the existence of the mens rea 4 may be illusory and/or impossible to establish. Arguably, this could result in an accountability gap, as the underlying rationale for the mens rea requirement in criminal law is that a sense of personal blame is absent if the accused did not in some way intend her action or omission (Supreme Court of Canada R v. Finta 1994: 760).