CHAPTER ONE
Norms versus Practice in International Law and Ethics
What we commonly call 'the laws of war' has two other names, used by professionals in the field: 'international humanitarian law' and 'the law of armed conflict'. Which name you use says a lot about how you think about the sources and purpose of law in the international system. Members of humanitarian organizations and many international lawyers prefer the name 'international humanitarian law'. They date the modern emergence of this body of law to the efforts of Swiss businessman Henry Dunant, founder of the International Committee of the Red Cross (ICRC). Dunant convened the first meeting of the ICRC in 1863. He was inspired by the horrors he witnessed at the Battle of Solferino in northern Italy four years earlier. The committee was initially formed to aid wounded soldiers; it then expanded its scope to include sailors, and later, prisoners of war. As war became increasingly destructive to civilians, the ICRC embraced a broader mandate to extend protections to noncombatants as far as possible. The organization is considered the custodian of the Geneva Conventions and its role is explicitly recognized in the treaties.
The description 'law of armed conflict' is the preferred choice of military professionals whose historical reference point is the Lieber Code, issued in that same year of 1863. US President Abraham Lincoln commissioned a document formally titled Instructions for the Government of Armies of the United States in the Field, General Orders Number 100. It was intended to codify existing military practice so that soldiers of the Union Army would abide by it. Its author, Francis (Franz) Lieber, was a German-American jurist and professor at Columbia University who had fought in the Prussian Army against Napoleon. He supported the Union during the Civil War, even though he had lived for many years in South Carolina and his son died in 1862 fighting on the Confederate side.
The two ways of understanding the laws of war share much in common, starting with the basic distinction between ius ad bellum – the conditions under which resorting to war is considered legitimate – and ius in bello – the types of military strategies and weapons allowed and the treatment of prisoners of war and civilian noncombatants. This fundamental distinction originated with the Catholic Just War tradition centuries ago, but it reflects principles broadly shared across the world's religions. Perhaps more significantly, the ethical tradition of just war became the basis for modern legal restrictions on warfare to which all states are bound.
The United Nations Charter, for example, embodies the principles of ius ad bellum in its prohibition on the use of force except for reasons of self-defense or when it is authorized by the Security Council for the preservation of international peace and security. Here the principle of right authority comes into play. To justify the use of military force for self-defense, the proper authority is the state itself. This customary norm of international law is reinforced by the United Nations Charter's reference (in Article 51) to 'the inherent right of individual or collective self-defense' that all UN member states enjoy. In situations that affect international peace and security but may not pose direct threats to member states that would justify individual military action in self-defense, the UN Security Council is considered the right authority to sanction the use of force, under Chapter VII of the Charter. The criterion of reasonable hope also falls within the scope of the ius ad bellum. It refers to the requirement that there be some realistic expectation that the goals of the military operation contemplated will be achieved. It would be difficult to justify harm done to innocent civilians, not to mention the deaths of a country's own soldiers (or even those of the other side), for a hopeless cause. Both right authority and reasonable hope figure prominently in legal and political discussions on such topical issues as 'humanitarian intervention' and preventive war.1
Ius in bello principles are reflected in the various Hague and Geneva Conventions which constitute the laws of war and in treaties restricting the use of particular weapons or military practices. Within ius in bello, two principles loom large in importance: distinction (or discrimination) and proportionality, which, in turn, is linked to the doctrine of double effect. The principle of distinction requires that armed forces distinguish between military and civilian targets, attacking the former and seeking to avoid harming the latter. Double effect is an ethical concept, usually attributed to Thomas Aquinas, with applications to fields ranging from medicine to warfare. It focuses on the actor's intentions in carrying out an act which can have both good and evil consequences (thus, a double effect). In war, killing civilians is considered evil, whereas destroying military facilities or killing soldiers is good. Many attacks produce both effects. Some harm to civilians is expected to occur in all armed conflicts, so killing civilians per se is not illegal or immoral according to just war theory. In order to satisfy the ethical criteria and to adhere to the laws of war, armed forces are not allowed to target civilians directly or to use the deaths of civilians as a means to an end, even if that end – victory – is good. But forswearing the deliberate intention of killing civilians is not enough to excuse or justify their deaths in a military engagement. There must be a reasonable judgment that the good effect – the military benefit – outweighs the evil effect of harm to civilians. That is where the principle of proportionality comes in.
Here is a standard, concise definition: 'The loss of life and damage to property incidental to attacks must not be excessive in relation to the concrete and direct military advantage expected to be gained.'2 That this definition comes from a US Army field manual reinforces the point: Over the centuries, the ethical concepts made their way from just war theory into the body of modern international law, and from there into the rules of engagement which are supposed to govern military practice.
Despite common features, the divergent names for the laws of war imply different points of emphasis. One might hypothesize, for example, that the humanitarian approach reflected in international humanitarian law would privilege civilian welfare over optimal military performance, whereas the military professionals' understanding of the law of armed conflict would favor the exigencies of successful combat and 'military necessity' over the protection of civilians. Like any dichotomy, this one is not fully accurate, but it does seem to reflect the basic tensions underlying the role of law and ethics in international politics, especially as they have surfaced in connection with the war on terror.
In 1939 Edward Hallett Carr, the British historian and analyst of international affairs, described a 'fundamental divergence' in the understanding of international law 'between those who regard law primarily as a branch of ethics, and those who regard it primarily as a vehicle of power'.3 More than six decades later, the Global War on Terror exposed this divergence as still the fundamental source of disagreement on the purpose of the laws of war. Consider the debate which has emerged over the question, as one observer put it, 'Who owns the rules of war?'4 Proponents of expanding the scope of humanitarian protections endorse the likeminded efforts of nongovernmental and international organizations and states, particularly in Europe, which have favored limitations on certain weapons and strategies and have sought to hold governments accountable for abuses perpetrated against detainees suspected of conducting or planning terrorist acts. Advocates of this approach invoke pragmatic arguments for pursuing alleged war criminals and scofflaws – deterrence of future abuses, for example – but a sense of moral outrage seems an equally strong motivating force. In the war on terror, that outrage is directed not only against those who undertake terrorist acts but also against those who torture terrorist suspects or wreak havoc on innocent civilians, in an attempt to defeat insurgents.
The opposite position reflects greater concern about the military capabilities of states that are directly engaged in fighting terrorists worldwide and guerrilla insurgencies in places such as Afghanistan and Iraq. The rights of terrorist suspects and of civilians in conflict zones are secondary. Proponents of this position argue that the states that actually practice warfare should set the rules and should not be constrained by other states and organizations that have no direct involvement in military matters. Following Robert Kagan's celebrated generalization that 'Americans are from Mars and Europeans are from Venus', they argue that the pacifically oriented states of the European Union, for example, should not determine the rules that define US military conduct.5 It is the United States, after all, that bears the lion's share of the burden of defending the world from terrorism. Some suggest that European countries are cynically employing 'lawfare' to limit US power to their benefit. They find it incredible, for example, that the chair of the International Criminal Court's working group tasked with defining the crime of aggression should be the United Nations representative from Liechtenstein, a tiny country that disbanded its eighty-strong army in 1868 and remained neutral in both world wars. In response, defenders of the court might question why the nationality of the working group's chair should be expected to make any difference to the group's deliberations, when some 150 experts have been involved.6 And if nationality did make a difference, what better country to assure an even-handed approach than one that neither engaged in aggression nor suffered from it?
Norm Expansion and State Practice
Where the two opposing camps appear to find common ground is on the influence, to date, of states and organizations seeking to broaden the scope of civilian protections and to narrow the legitimate uses of force. They mostly agree that this influence has been extensive. Proponents of a greater role for individuals, organizations, and small states celebrate the emergence of something they call transnational or global civil society and the increasing ability of 'norm entrepreneurs' to shape the norms that govern international politics, including security policy. Opponents observe the same phenomena and decry the undue influence of unelected individuals, unaccountable organizations and irresponsible governments.
The reality appears to be more ambiguous than either perspective suggests. The rest of this chapter takes up four examples in order to identify the competing interpretations and illustrate the difficulty of drawing any firm conclusions about what ultimately is a political process. The examples come from domains that lawyers would typically separate into two categories, even if to a lay person their names sound rather similar: international humanitarian law and international human rights law. The first governs the (violent) behavior of states towards other states. The second governs the behavior of states towards individuals, and, in particular, towards their own citizens. The emergence of a body of international law that tells states how to treat their own citizens already represents the kind of expansion of norms on behalf of individuals and at the expense of state sovereignty which many observers associate with the second half of the twentieth century, and particularly the end of the Cold War. The main question that motivates this study is whether the Global War on Terror is likely to put a stop to this apparent trend; more broadly, what are the political dynamics that influence the evolution of international legal and ethical practices? To give a preview of the type of analysis taken up in the rest of the book, the remainder of this chapter introduces four areas where norm entrepreneurs have sought to influence state practice. These include efforts: (1) to limit the impact of war on civilians; (2) to stigmatize and outlaw the practice of torture; (3) to promote international justice for crimes against humanity and war crimes; and (4) to endorse military intervention for humanitarian purposes.
Protecting Civilians in War
Many observers have noted an increasing influence of non-state actors over the last decades, even in realms concerning the most fundamental sovereign prerogatives of states – defense and security, on the one hand, and the treatment of their own citizens, on the other.7 In the security domain, organizations such as the International Committee of the Red Cross and Human Rights Watch sought to limit the impact of war on civilians.
Perhaps the most striking example, in the years following the end of the Cold War, of the influence of 'global civil society' – for both its admirers and detractors – was the campaign to ban landmines. The effort was spearheaded (to use an inappropriately bellicose metaphor) by nongovernmental organizations. In just five years, grassroots and transnational activists convinced a number of states to sponsor a process that resulted in the 1997 Ottawa Mine Ban Treaty to outlaw the production, sale, and deployment of antipersonnel mines.8
Skeptics of a realist bent (who focus on state prerogatives and concerns about security) would point out that the treaty's signatories did not include the world's major producers of landmines, which happened also to be some of the world's leading military powers: the United States, China, Russia. Absent from the list were also countries in particularly war-prone regions: Syria, Egypt, Israel, Iran, Iraq, Saudi Arabia, India, Pakistan.9 Is this, then, a case that defies the fears (of some) and the hopes (of others) that global civil society will increasingly constrain state prerogatives for making war? Is the mine ban, in fact, simply a feel-good measure with no real impact?
There are two counterarguments to that skeptical view: First, what realist would expect a treaty to come into force in the military sphere despite the opposition of the major states that engage in military operations? In the period 1999 to 2006, 151 countries became parties to the treaty, including Ukraine, whose arsenal of 6.7 million antipersonnel mines constituted the world's fourth largest. During that period nearly forty million stockpiled antipersonnel mines were destroyed, more than 1,100 square kilometers of land were cleared of more than four million antipersonnel mines and one million antivehicle mines, and donors contributed almost two billion dollars to the de-mining efforts.10 Second, the major powers, with the notable exception of Russia, had by and large abided by the treaty's provisions despite their opposition to them. The United States, as of late 2007, had not used such mines since the 1991 Gulf War, had not exported them since 1992, and had not produced them since 1997. Moreover, the United States was the largest single donor to humanitarian mine-related programs, averaging about 100 million dollars a year for the fiscal years 2004 and 2005, for example.11
Only four governments had conducted new mine-laying operations since early 2003: Russia, Myanmar, Nepal, and Georgia. As the result of a policy review in February 2004, the United States was poised to violate the ban by producing new weapons, by continuing to stockpile old ones, and possibly by deploying landmines in Iraq.12 In September 2005, the New York Daily News reported that the Pentagon was close to making a decision to produce a new landmine. The Defense Department actually requested $1.3 billion for research and production of two new systems (between fiscal years 2005 and 2011). Although this was evidently a setback for efforts to 'universalize' the treaty, the Pentagon's behavior nevertheless reflected the influence of the anti-mine norms that produced the Ottawa accord. How so? As the newspaper put it, 'underscoring the unpopularity of the devices, defense officials working on the program, called Spider, decline to call the weapon a land mine, opting instead for generic descriptions such as "networked munitions" '.13 In 2002, the Pentagon agency known as the Project Manager for Mines, Countermine and Demolitions had already changed its name to Project Manager Close Combat Systems, to avoid the obvious association with a weapon that much of the world had declared illegal. So officials in the Pentagon apparently recognized a normative stigma against landmines, at least enough not to want to say out loud that they intended to produce new ones. Such an action would nevertheless be completely legal, because the United States never signed the Mine Ban Treaty. Reinforcing the stigma and bolstering the status of the Ottawa Mine Ban, US le...