(Tavani, 2018, p. 1; emphasis in original)
Throughout this work, each of these sub-questions will be answered to some extent. By way of advance warning, more effort will be expended to identify the kinds of robots that might deserve rights, establish the criterion for determining rights eligibility, assess the importance of agency in the calculation of moral consideration, and explain the rationale invoked to support the preceding arguments than to itemize specific rights that might be bestowed upon robots.
Framing the debate: Properties versus relations
Broadly speaking, ethicists, philosophers, and legal scholars have extensively debated the answer to the machine question, with some finding that robots might qualify for rights and others rejecting the possibility on jurisprudential, normative, or practical grounds. Both sides of the debate frame their positions chiefly in terms of either the properties of an intelligent machine or its relationship to other entities (Tavani, 2018, p. 2). This division has its roots in the philosophical concept known as the is/ought problem, articulated by Hume (1738/1980) in A Treatise of Human Nature. The problem, so to speak, occurs when a value-laden statement masquerades as a fact-based one; we treat something a certain way by virtue of how we think it ought to be treated, not by virtue of what it actually is. Therefore, the philosophical task of figuring out the moral status of an entity and how to act towards it necessarily involves understanding whether ought is derived from is or vice versa.1 More concretely, in the properties-based approach, the way we decide how to treat a robot (how we believe we ought to engage with it) depends on its characteristics (what it is). In the relational approach, the moment we enter into social relations with an entity, obligations towards it are established (how we ought to treat it) irrespective of the qualities that suggest its alterity (what it is).2 In the space here, I briefly summarize the thrust of these arguments with an eye towards more fully examining the relationship between these positions and cognate concepts such as personhood and rights, which I discuss in Chapter Two. As we shall see, the lively discussion about robot rights has suffered from an inattention to the relationship between key concepts, unacknowledged cultural biases, and challenges associated with tackling an interdisciplinary problem.
One camp consists of analysts who argue that robots do not or should not have rights, focusing mainly on the properties of such intelligent artifacts and, to a lesser extent, on the relational dimension of HRI. In one of the earlier works indicative of this perspective, Miller (2015) contends that what separates humans and animals from “automata” is the quality of “existential normative neutrality” (p. 378). Whereas the ontological status of humans and animals is taken for granted, the existence of automata is actively constructed by human agents. Confusingly, Miller writes about the connection between moral status and the eligibility for full human rights, by which he means the entire suite of legal rights expressed in major international human rights documents. In addition, he claims that “humans are under no moral obligation to grant full human rights to entities possessing ontological properties critically different from them in terms of human rights bases” (Miller, 2015, p. 387). This assertion nearly qualifies as a strawman argument. As shown below, those finding robot rights philosophically tenable do not advocate for the assignment of all major human rights to technological entities. Furthermore, conflating moral rights with legal rights overlooks the varied reasons why nonhumans might be and have been extended the latter kind of protection.
For Solaiman (2017), the question revolves around the extent to which robots can fulfill legal duties, which are “responsibilities commanded by law to do or to forbear something for the benefit of others, the failure in, or disobedience of, which will attract a remedy” (p. 159). Whereas corporations consist of people who can perform duties and idols have managers who tend to their legal interests, robots have no such human attachments. Therefore, since robots cannot fulfill legal duties, they cannot meet the criteria for legal personhood and thus they are not entitled to legal rights.
Bryson et al. (2017) rebuff the idea of granting either moral or legal rights to robots. They contend that robots do not possess the qualities intrinsic to moral patients (i.e., consciousness), so they cannot hold moral rights or be considered moral patients, making them ineligible for legal personhood and thus not entitled to legal rights (pp. 283–4). Further, leaning on Solaiman, the authors urge that absent the ability to be held accountable for one’s actions, an artificial entity cannot fulfill legal duties and therefore does not qualify as a legal person. This lack of accountability could result in “humans using robots to insulate themselves from liability and robots themselves unaccountably violating human legal rights” (Bryson et al., 2017, p. 285).3 Neither of these outcomes advances the ultimate objective of an established legal order: “to protect the interests of the people” (Bryson et al., 2017, p. 274; emphasis in original). In short, the costs of affording robots rights outweigh the benefits of doing so.
For Bryson (2018), robots should not be assigned the status of either moral agents or moral patients because doing so would place human interests in competition with the interests of artificial entities, which is unethical. Determining whether an entity qualifies as a moral patient or a moral agent is critical in establishing whether or not it possesses moral duties and/or moral rights. Bryson agrees with Solaiman that while humans have the power to assign legal duties and legal rights to any entity, these forms of recognition are only available to “agent[s] capable of knowing those rights and carrying out those duties” (Bryson, 2018, p. 16). If a robot does not meet the criteria for either moral agency or moral patiency, it cannot hold moral rights.4 In fact, Bryson (2010) controversially contends, robots should be treated as mere slaves.5
More recently, Birhane and van Dijk (2020) adopt a “post-Cartesian, phenomenological view” and conclude that “robots are [not] the kinds of beings that could be granted or denied rights” (p. 2). Whereas all humans share a capacity for “lived embodied experience” (Birhane & van Dijk, 2020, p. 2), robots do not. Robots are technological artifacts that may contribute to the human experience, but they are merely elements present in the human social world, not beings unto themselves. As such, the authors take a relational approach to robot rights but reach a conclusion opposite to the one obtained by Coeckelbergh (2010, 2011, 2014) and Gunkel (2012, 2018a).6 Finally, instead of focusing on the rights of robots, the scholars suggest, we should concentrate our efforts on safeguarding human welfare, which is the ultimate reason for contemplating rights for AI anyway.
This article is logically flawed and deeply contradictory, rendering its arguments highly suspect. First, the very title of the piece frames the issue in terms of both a strawman argument and a false dichotomy. Robot rights are neither promoted solely as a means of advancing human welfare, nor are robot rights and human welfare mutually exclusive objectives. Second, their alleged employment of a post-Cartesian outlook is belied by their assessment that while robots are embedded in human social practices, they are still different enough from humans to warrant their exclusion from the moral circle. This move ignores the ontological flattening that occurs when viewing the moral universe as a social-relational whole. If, in fact, “technologies are always already part of ourselves” (Birhane & van Dijk, 2020, p. 3; emphasis in original), there is no basis for the kind of ontological separation described by Descartes. In short, the authors fail to present a convincing case for the dismissal of robot rights.
Another camp comprises those writers who maintain that robots could conceivably possess rights, exploring the possibilities generated by the properties of such entities, their relationship with humans and the larger context in which they operate, or a combination of the two. The justifications supplied by these advocates are mostly philosophical, but a few are explicitly leg...