Rights for Robots

Artificial Intelligence, Animal and Environmental Law

Joshua C. Gellers

  1. 172 pages
  2. English
  3. ePub

About the book

Bringing a unique perspective to the burgeoning ethical and legal issues surrounding the presence of artificial intelligence in our daily lives, the book uses theory and practice on animal rights and the rights of nature to assess the status of robots.

Through extensive philosophical and legal analyses, the book explores how rights can be applied to nonhuman entities. It does so by developing a framework for determining the kinds of personhood for which a nonhuman entity might be eligible, together with a critical environmental ethic that extends moral and legal consideration to nonhumans. The framework and ethic are then applied to two hypothetical situations involving real-world technology—animal-like robot companions and humanoid sex robots. Additionally, the book approaches the subject from multiple perspectives, providing a comparative study of legal cases on animal rights and the rights of nature from around the world and insights from structured interviews with leading experts in the field of robotics. The book ends with a call to rethink the concept of rights in the Anthropocene and offers suggestions for further research.

An essential read for scholars and students interested in robot, animal and environmental law, as well as those interested in technology more generally, the book is a ground-breaking study of an increasingly relevant topic, as robots become ubiquitous in modern society.

The Open Access version of this book, available at http://www.taylorfrancis.com/books/e/ISBN, has been made available under a Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) 4.0 license.


Information

Publisher
Routledge
Year
2020
ISBN
9781000264593

1 Rights for robots

Making sense of the machine question
Sometimes I Forget You’re a Robot
(Sam Brown, 2013)
Most of the literature on the ethical dimensions of robots concerns at least one of the following five areas: (1) human actions completed through the use of robots, (2) the moral standing of robots, (3) the behavior of robots, (4) the ethical implications of introducing robots into social or occupational spaces, and (5) self-reflection by scholars regarding the impact of robots on their field of study (Steinert, 2014, p. 250). In this book, I am primarily interested in contributing to the second area of inquiry listed above (along with its analog in the legal domain), although this is not to diminish the importance of any of the other ethical issues raised by robots and their application in human endeavors. For instance, there is exciting and important research being conducted on the ethics of drone warfare (e.g., Enemark, 2013), how robots deployed in nursing homes act towards the elderly (e.g., Sharkey & Sharkey, 2012), the effects of using robots in the classroom on teachers and children (e.g., Serholt et al., 2017), ethical considerations in the design of robots used for love or sex (e.g., Sullins, 2012), and the ethical conduct of scholars working on human–robot interaction (HRI) (e.g., Riek & Howard, 2014). The point here is that the discussion regarding the field of “roboethics” (Veruggio & Operto, 2006, p. 4) is far more complicated and multi-faceted than is suggested by the narrow slice entertained in this work. We have come a long way from Asimov’s (1942) three laws of robotics, which exclusively prescribed ethical directives intended to govern robot behavior.
The present text focuses on the moral and legal standing of robots, and seeks to develop a response to the following question—can robots have rights? This line of inquiry necessarily entails five separate, albeit related, sub-questions:
(i) Which kinds of robots deserve rights?
(ii) Which kinds of rights do these (qualifying) robots deserve?
(iii) Which criterion, or cluster of criteria, would be essential for determining when a robot could qualify for rights?
(iv) Does a robot need to satisfy the conditions for (moral) agency in order to qualify for at least some level of moral consideration?
(v) Assuming that certain kinds of robots may qualify for some level of moral consideration, which kind of rationale would be considered adequate for defending that view?
(Tavani, 2018, p. 1; emphasis in original)
Throughout this work, each of these sub-questions will be answered to some extent. As advance warning, more effort will be expended to identify the kinds of robots that might deserve rights, establish the criterion for determining rights eligibility, assess the importance of agency in the calculation of moral consideration, and explain the rationale invoked to support the preceding arguments than to itemize specific rights that might be bestowed upon robots.

Framing the debate: Properties versus relations

Broadly speaking, ethicists, philosophers, and legal scholars have extensively debated the answer to the machine question, with some finding that robots might qualify for rights and others rejecting the possibility on jurisprudential, normative, or practical grounds. Both sides of the debate frame their positions chiefly in terms of either the properties of an intelligent machine or its relationship to other entities (Tavani, 2018, p. 2). This division has its roots in the philosophical concept known as the is/ought problem, articulated by Hume (1738/1980) in A Treatise of Human Nature. The problem, so to speak, occurs when a value-laden statement masquerades as a fact-based one; we treat something a certain way by virtue of how we think it ought to be treated, not by virtue of what it actually is. Therefore, the philosophical task of figuring out the moral status of an entity and how to act towards it necessarily involves understanding whether ought is derived from is or vice versa.1 More concretely, in the properties-based approach, the way we decide how to treat a robot (how we believe we ought to engage with it) depends on its characteristics (what it is). In the relational approach, the moment we enter into social relations with an entity, obligations towards it are established (how we ought to treat it) irrespective of the qualities that suggest its alterity (what it is).2 In the space here, I briefly summarize the thrust of these arguments with an eye towards more fully examining the relationship between these positions and cognate concepts such as personhood and rights, which I discuss in Chapter Two. As we shall see, the lively discussion about robot rights has suffered from an inattention to the relationship between key concepts, unacknowledged cultural biases, and challenges associated with tackling an interdisciplinary problem.
One camp consists of analysts who argue that robots do not or should not have rights, focusing mainly on the properties of such intelligent artifacts and, to a lesser extent, on the relational dimension of HRI. In one of the earlier works indicative of this perspective, Miller (2015) contends that what separates humans and animals from “automata” is the quality of “existential normative neutrality” (p. 378). Whereas the ontological status of humans and animals is taken for granted, the existence of automata is actively constructed by human agents. Confusingly, Miller writes about the connection between moral status and the eligibility for full human rights, by which he means the entire suite of legal rights expressed in major international human rights documents. In addition, he claims that “humans are under no moral obligation to grant full human rights to entities possessing ontological properties critically different from them in terms of human rights bases” (Miller, 2015, p. 387). This assertion nearly qualifies as a straw man argument. As shown below, those finding robot rights philosophically tenable do not advocate for the assignment of all major human rights to technological entities. Furthermore, conflating moral rights with legal rights overlooks the varied reasons why nonhumans might be and have been extended the latter kind of protection.
For Solaiman (2017), the question revolves around the extent to which robots can fulfill legal duties, which are “responsibilities commanded by law to do or to forbear something for the benefit of others, the failure in, or disobedience of, which will attract a remedy” (p. 159). Whereas corporations consist of people who can perform duties and idols have managers who tend to their legal interests, robots have no such human attachments. Therefore, since robots cannot fulfill legal duties, they cannot meet the criteria for legal personhood and thus they are not entitled to legal rights.
Bryson et al. (2017) rebuff the idea of granting either moral or legal rights to robots. They contend that robots do not possess the qualities intrinsic to moral patients (i.e., consciousness), so they cannot hold moral rights or be considered moral patients, making them ineligible for legal personhood, and thus not entitled to legal rights (pp. 283–4). Further, leaning on Solaiman, the authors urge that absent the ability to be held accountable for one’s actions, an artificial entity cannot fulfill legal duties and therefore does not qualify as a legal person. This lack of accountability could result in “humans using robots to insulate themselves from liability and robots themselves unaccountably violating human legal rights” (Bryson et al., 2017, p. 285).3 Neither of these outcomes advances the ultimate objective of an established legal order—“to protect the interests of the people” (Bryson et al., 2017, p. 274; emphasis in original). In short, the costs of affording robots rights outweigh the benefits of doing so.
For Bryson (2018), robots should not be assigned the status of either moral agents or moral patients because doing so would place human interests in competition with the interests of artificial entities, which is unethical. Determining whether an entity qualifies as a moral patient or a moral agent is critical in establishing whether or not it possesses moral duties and/or moral rights. Bryson agrees with Solaiman that while humans have the power to assign legal duties and legal rights to any entity, these forms of recognition are only available to “agent[s] capable of knowing those rights and carrying out those duties” (Bryson, 2018, p. 16). If a robot does not meet the criteria for either moral agency or moral patiency, it cannot hold moral rights.4 In fact, Bryson (2010) controversially contends that robots should be treated as mere slaves.5
More recently, Birhane and van Dijk (2020) adopt a “post-Cartesian, phenomenological view” and conclude that “robots are [not] the kinds of beings that could be granted or denied rights” (p. 2). Whereas all humans share a capacity for “lived embodied experience” (Birhane & van Dijk, 2020, p. 2), robots do not. Robots are technological artifacts that may contribute to the human experience, but they are merely elements present in the human social world, not beings unto themselves. As such, the authors take a relational approach to robot rights but reach a conclusion opposite to the one reached by Coeckelbergh (2010, 2011, 2014) and Gunkel (2012, 2018a).6 Finally, instead of focusing on the rights of robots, the scholars suggest, we should concentrate our efforts on safeguarding human welfare, which is the ultimate reason for contemplating rights for AI anyway.
This article is logically flawed and deeply contradictory, rendering its arguments highly suspect. First, the very title of the piece frames the issue in terms of both a straw man argument and a false dichotomy. Robot rights are neither promoted solely as a means of advancing human welfare, nor are robot rights and human welfare mutually exclusive objectives. Second, their alleged employment of a post-Cartesian outlook is belied by their assessment that while robots are embedded in human social practices, they are still different enough from humans to warrant their exclusion from the moral circle. This move ignores the ontological flattening that occurs when viewing the moral universe as a social-relational whole. If, in fact, “technologies are always already part of ourselves” (Birhane & van Dijk, 2020, p. 3; emphasis in original), there is no basis for the kind of ontological separation described by Descartes. In short, the authors fail to present a convincing case for the dismissal of robot rights.
Another camp comprises those writers who maintain that robots could conceivably possess rights, exploring the possibilities generated by the properties of such entities, their relationship with humans and the larger context in which they operate, or a combination of the two. The justifications supplied by these advocates are mostly philosophical, but a few are explicitly legal.
