Personhood and Social Robotics

A psychological consideration
About this book

Research into human-robot interaction (HRI), which serves an exponentially growing industry, has drawn predominantly upon psychologists' descriptions of the mechanisms of face-to-face dyadic interactions. This book considers how social robotics is beginning unwittingly to confront an impasse that has been a perennial dilemma for psychology, associated with the historical 'science vs. art' debate. Raya Jones examines these paradigmatic tensions and, in tandem, considers ways in which the technology-centred discourse both reflects and impacts upon our understanding of our relational nature.

Chapters in the book explore not only how the technology-centred discourse constructs machines as us, but also how humans feature in this discourse. Focusing on how social interaction is conceptualised when human-robot interaction is discussed, the book addresses issues such as the long-term impact on persons and society, the authenticity of relationships, and challenges to notions of personhood. Leaving terminological disputes aside, Jones seeks to transcend the ritual of pitching theories against one another in order to analyse comprehensively terms such as subjectivity, self and personhood, and their fluid interplay in the world that we inhabit.

Personhood and Social Robotics will be a key text for postgraduate students, researchers and scholars interested in the connection between technology and human psychology, including psychologists, science and technology studies scholars, media studies scholars and humanists. The book will also be of interest to roboticists and HRI researchers, as well as those studying or working in areas of artificial intelligence and interactive technologies more generally.


Chapter 1
Problematizing personhood differently
What is a person? T. R. Miles (1957) tackled this philosophical issue imaginatively. He invited the reader to imagine homo mechanisma: a flesh-and-blood machine capable of producing exactly the same responses we would expect of a human being. Psychologists could measure its IQ and psychiatrists could determine its psychopathology. It would show affection if people were kind to it, anger and dismay if people tried to cheat it, and due appreciation when confronted with a beautiful poem or a beautiful sunset. On what grounds should we deny the machine the status of personhood? This thought experiment may throw light on how ‘the concepts “man”, “machine”, “mind” and “consciousness” … function at present’ (ibid: 278).
I open with Miles’s little-known essay because the present study seeks to do something roughly similar. It interrogates social robotics to see how concepts such as person and robot, human and machine, and subjectivity or the self, operate at present. Technology-wise, the present day is quite different from the 1950s, when Miles contrived homo mechanisma—a decade launched with Alan Turing’s (1950) seminal paper, which famously set a benchmark for artificial intelligence (AI). The Turing Test is premised on the extent to which people interacting with an AI are fooled into believing that they are interacting with another human. Likewise, the premise of Miles’s thought experiment is that people will be completely fooled by the machine. Today, caricatured humanoid robots such as ASIMO (Honda), QRIO (Sony), NAO (Aldebaran Robotics) and iCub (developed as part of the EU project RobotCub) are not designed to fool anyone.
Whereas Miles sought to pinpoint existing criteria that may prevent assimilating homo mechanisma into humanity, some roboticists and scientists close to the engineering field call for altering our existing criteria of personhood so as to accommodate the machine. The field’s goal ought to be defined in terms of ‘the creation of an artificial person, while defining person with language that is free of … the presumption of human superiority,’ say MacDorman and Cowley (2006: 378). This view is not promoted by everyone in robotics, but its expression even by a few evinces how conventional conceptions of personhood are being challenged.
Nowadays arguments similar to those delineated by Miles are raised with an entirely different sense of urgency. The imagined object is no longer ‘a subject of fantastic speculation rather than of practical possibility’ (Miles 1957: 278), but is regarded as an inevitable technological outcome. Citing the UN Declaration of Rights, Torrance (2008: 501) asked, ‘are there any circumstances under which it might be morally appropriate for us to consider extending such rights specifically to humanoids that are taken to be devoid of phenomenal consciousness?’ The question is rhetorical in the context of his argument, but it is not a thought experiment. It attests to problematizing personhood for a different reason than did Miles. Whereas Miles built his ontological argument around a hypothetical classification dilemma, Torrance points to an issue in applied ethics that will become real (at least according to those who raise it). Miles sought to establish logical criteria for defining personhood—how to categorize person/not-person, thereby to answer what makes us ‘special’. Torrance endorses the ‘organic view’ (his phrase) according to which AIs are unlikely to possess sentience, hence will lack the kind of empathic rationality that is necessary for being a moral agent. The two texts are separated by half a century during which an old fantasy has taken a decisive twist. It has turned from a playful as-if to a realistic what-if.
The subjectivity paradox
Back in the day when humanoid robots were confined to fiction, Miles (1957) dismissed criteria such as natural birth and consciousness as reasons for denying homo mechanisma the status of a person. He proposed that the machine’s lack of a body-schema is a sufficient reason. Irrespective of how cogent this criterion is as far as philosophical arguments go, and aside from the technology’s advances towards giving a body schema to artificial systems (e.g. Sturm et al. 2009), it is germane here that Miles (a psychologist) invoked our capacity to be reflexively aware of our own embodiment—in a word, subjectivity—as the irrefutable hallmark of personhood.
The invocation of subjectivity rests uneasily with monism. Monism is encapsulated in the title theme of La Mettrie’s (1748 [1912]: 148) treatise L’Homme machine and his summing up: ‘Let us then conclude boldly that man is a machine.’ La Mettrie sought to resolve the problem of Cartesian dualism by proposing to consider all mental faculties as aspects of corporeal or material substance (res extensa). Building upon a passing comment made by psychoanalyst Jacques Lacan in a seminar—in which Lacan urged his audience to read La Mettrie—de Vos (2011: 71) positions La Mettrie as ‘one of the first to understand that with the emergence of science we also see the emergence of the symbolic, mathematized body’ with zero subjectivity. This symbolic body epitomises a paradox of modernity already present in La Mettrie’s thesis: ‘drawing the cogito into the res extensa cannot be achieved without a remainder’ (ibid: 70). There is inevitable subjectivity in imagining oneself as a being with zero subjectivity. This paradox has been endemic to modern psychology ever since the discipline’s formation as a natural science in the late nineteenth century. In his essay ‘Are we automata?’ William James (1879) queried neurologists’ view that subjective states are merely epiphenomena of brain activity. More than a century later, neuroscientists have claimed significant advances towards understanding how the brain generates subjective states. Damasio (1994) revisits the contestable Cartesian separation of body, emotion and rationality. He proposes to resolve it by postulating a brain-based mechanism whereby emotional input may guide rational behaviour and decision-making (the somatic marker hypothesis). Not everyone concurs with Damasio’s conception of the emotional experience. While debates about the details are rife, a widely shared faith in neuroscience as the royal road to understanding subjectivity could be viewed as a triumph of l’homme machine. This triumph makes it conceivable to reverse engineer the brain so as to create a self-aware artificial intelligence—and the technology appears to be catching up.
The technological plausibility of artificial minds gives the subjectivity paradox a new twist. Nowadays we don’t have to imagine ourselves being machines devoid of subjectivity. We imagine machines with subjective states. This fact, the act of imagining such machines—rather than the issue of whether artificial intelligence could be self-aware in the way humans are—gives impetus to the present study. The voluminous literature surrounding machine consciousness is left out of this book.
The analyses reported throughout the following chapters problematize personhood differently by seeking to locate ‘human’ in the technology-centred discourse about socially interactive robots. Depending on the specific purpose for which they are built, such robots may or may not need a consciousness similar to ours. Pragmatically it may be more important to ensure that people experience the interaction with the robot as a natural interaction with other people. Engineers typically translate this problem into the technical challenges of designing robots that give the illusion of making eye contact, expressing emotions, and so forth. Yet when we try to imagine what would enable or hinder people’s perceiving a good-enough fake as if it were another person, the subjectivity paradox inevitably arises—even if it is not always confronted in the engineering literature. The subjectivity paradox at this juncture relates to the distinction between the ‘mechanics’ of making eye contact with someone and the experiential quality of a mutual glance (Chapter 7 takes this further). The absence of this quality when people interact with humanoid robots could make people feel ill at ease.
The uncanny valley hypothesis, proposed by the Japanese roboticist Mori in 1970 (the topic of Chapter 8), predicts negative emotional reactions to artefacts that are too similar to a human. One compelling explanation is that humanoid robots are ‘potential transgressors, trammelling our sense of identity and purpose’ as human beings due to a deeply held worldview that distinguishes human from nonhuman (MacDorman et al. 2009: 486). MacDorman and his colleagues elaborate the idea in the cognitive-psychological terms of a ‘category boundary problem’: ‘there is something disturbing about things that cross category boundaries’ (ibid: 487). I shall make a similar point, but prefer to term it ontological dissonance. To MacDorman and others in the field of robotics, the resolution of this problem lies in changing our worldview so as to include electromechanical persons in the same category as humans (MacDorman and Cowley 2006; Ramey 2005). However, besides being only an intellectual exercise at present, this solution does not eliminate the ‘subjectivity’ issue epitomised in the mutual glance—the meeting of two souls. Lord Byron powerfully dramatizes the union of irreconcilable opposites. The gate of Heaven, where souls are despatched to one afterlife or the other, is a ‘neutral space’, says the poet,
And therefore Michael and the other wore
A civil aspect; though they did not kiss,
Yet still between his Darkness and his Brightness
There passed a mutual glance of great politeness.
(Byron, 1824 [2010]: 499)
I want to underline more prosaically the meeting of opposites—Self and Other, I and You—which constitutes a differentiation that is foundational for consciousness of oneself as a person.
The last century has given rise to a plethora of theories that in various ways, with differing emphases and philosophical lineages, attribute the possibility of having a sense of self to this I–You dialogical space, this state of betweenness that spontaneously happens even in a casual mutual glance. This is the social model of personhood. It is indigenous to Western individualism. This model is inevitably invoked in the discourse of social robotics, at least in the Anglophone world, through the imaginative insertion of a robot into the ‘I–You’. Ripples of the ingrained social model spread out to touch academic specialisms that have made little or no contact with each other. Furthermore, since the industry is significantly led by Japan and South Korea, ripples of Western individualism intermingle with the mode of self-construal in collectivist societies and influences of Eastern systems of thought, such as Buddhism and Confucianism. I’ll return to that East/West contrast at various points throughout the book.
Taken into (Western) academia in one direction, the ripple effect of the ‘I–You’ in social robotics meets the mind–body problem of how human bodies construct themselves as persons, an issue that MacDorman (2007) has called ‘the person problem’ (Chapter 5 expands). This classic problem has some pragmatic implications for engineering. If scientists could determine exactly how human bodies become persons, perhaps engineers could make it happen also in electromechanical bodies. Separately and much earlier, G. H. Mead (1934: 137) purported to resolve the problem by reference to language: as a system of symbols, language creates the possibility of referring to oneself, and thus provides the human being with ‘some sort of experience in which the physical organism can become an object to itself’. Language makes it possible for us to enter our own experiences ‘as a self or individual’ by taking upon us the attitudes of others with whom we share contexts of experience (ibid: 138). This premise has become a staple truth in sociology. As citations in later chapters will attest, the sociological tradition makes it reasonable (at least for its adherents) to suggest that in the near future people will enter their experiences as selves also by sharing contexts with artificial others. I shall query that reasoning. The various elaborations of Mead’s idea during the last century often result in converting the inquiry about subjectivity into an empirical description of how people talk about themselves. In contrast, this study aims to describe the subtle ways in which the subjectivity paradox impacts on social robotics—a context into which theories of the self or personhood are occasionally imported but seldom (if ever) created. The tacit impact of the social model starts with the very concept of a social robot.
Social robots as discursive productions
Robotics is a rapidly developing branch of engineering, within which social robotics is a field dedicated to designing systems that interact with people. When discourse is understood as a system of statements involving ‘practices that systematically form the objects of which they speak’ (Foucault 1969: 54), it is clear that a robot has a dual life. It is both a machine built by engineers and an object created in discourse. While this study centres on the discursive production, my premise is that there are significant discontinuities between the psychological functions of robots imagined in fiction, film and art, on the one side, and, on the other, robots as objects that are formed in discourse about machines with which people actually or potentially interact (Chapter 3 takes a closer look). The realistic possibility of interaction, this switch from as-if to what-if, makes the difference.
Defining the socially interactive robot
Common definitions of a social robot refer to a machine that interacts with people within some defined relational role, such as a companion, tutor or nurse. Some such robots are autonomous, equipped with artificial intelligence that enables the robot to respond to cues from its environment, but some are remotely controlled by a human operator. The sociality of the robot is therefore a property of its interaction with people, not an engineered feature as such. For this book’s purposes, a serviceable definition of social robots is physically embodied intelligent systems that enter social spaces in community and domestic settings. This excludes disembodied automated response systems, search engines, etc. (which are already part of most people’s everyday life). The metaphor of a social space takes the definition beyond merely listing settings in which robots may be installed (hospitals, schools, shops, the home etc.) and towards regarding people’s experience as the salient criterion (e.g. a robot nurse may enter patients’ social space whereas a robot surgeon or a robot janitor in the same hospital wouldn’t).
Strictly speaking, my proposed definition excludes robots designed as labour-saving appliances. Yet researchers who gave American households vacuuming robots (Roombas) report that some of the participants came to regard the robot as a social entity, e.g. ascribing lifelike qualities to it and giving it a name, gender and personality (Sung et al. 2010). Having observed that people were demonstrating the robot to visitors, and that some took the robot along on vacation to show it around, the researchers concluded that even a robot vacuum cleaner can become a ‘mediator that enhances social relationships among household members’ (ibid: 423). Based on the findings, they formulated a conceptual framework (the Domestic Ecology Model) that ‘articulates the holistic and temporal relationships that robots create with surrounding home environment’ (ibid: 428). Washing machines and television sets too could be said to exist in holistic and temporal relationships with the home environment. People often anthropomorphize their cars, computers, or other objects of emotional attachment. Nonetheless, it seems a peculiarity of talking about robots that the machine leaps out of the material environment and into the social. The leap is subtle but implicates a fundamental shift in the understanding of sociality—a shift from construing sociality as a trait of the individual (human or robot) to construing it as a property of the dyadic interaction.
An early definition of social robots highlights the functional similarity to human and animal individuals within a group:
Social robots are embodied agents that are part of a ...

Table of contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright Page
  5. Table of Contents
  6. List of illustrations
  7. 1 Problematizing personhood differently
  8. 2 Means to meaning
  9. 3 The semiotic robot hypothesis
  10. 4 The relationship machine
  11. 5 Voices in the field: the pragmatic engineer, technocentric visionary and inquisitive scientist
  12. 6 Rhetoric and right action ahead of robot nannies
  13. 7 Subversions of subjectivity
  14. 8 Chronotope shifts in the uncanny valley
  15. 9 Narrativity of the act and the new ontology
  16. 10 Futures in the present tense
  17. Index