Chapter 1
Problematizing personhood differently
What is a person? T. R. Miles (1957) tackled this philosophical issue imaginatively. He invited the reader to imagine homo mechanisma: a flesh-and-blood machine that is capable of producing exactly the same responses we would expect of a human being. Psychologists could measure its IQ and psychiatrists could determine its psychopathology. It would show affection if people were kind to it, anger and dismay if people tried to cheat it, and due appreciation when confronted with a beautiful poem or a beautiful sunset. On what grounds should we deny the machine the status of personhood? This thought experiment may throw light on how "the concepts 'man', 'machine', 'mind' and 'consciousness' … function at present" (ibid: 278). I open with Miles's little-known essay because the present study seeks to do something roughly similar. It interrogates social robotics to see how concepts such as person and robot, human and machine, and subjectivity or the self, operate at present. Technology-wise the present day is quite different from the 1950s, when Miles contrived homo mechanisma – a decade that was launched with Alan Turing's (1950) seminal paper, which famously set a benchmark for artificial intelligence (AI). The Turing Test is premised on the extent to which people interacting with an AI are fooled into believing that they are interacting with another human. Likewise the premise of Miles's thought experiment is that people will be completely fooled by the machine. Today, caricatured humanoid robots such as ASIMO (Honda), QRIO (Sony), NAO (Aldebaran Robotics) and iCub (developed as part of the EU project RobotCub) are not designed to fool anyone. Whereas Miles sought to pinpoint existing criteria that may prevent assimilating homo mechanisma into humanity, some roboticists and scientists close to the engineering field call for altering our existing criteria of personhood so as to accommodate the machine.
The field's goal ought to be defined in terms of "the creation of an artificial person, while defining person with language that is free of … the presumption of human superiority," say MacDorman and Cowley (2006: 378). This view is not promoted by everyone in robotics, but its expression even by a few evinces how conventional conceptions of personhood are being challenged.
Nowadays arguments similar to those delineated by Miles are raised with an entirely different sense of urgency. The imagined object is no longer "a subject of fantastic speculation rather than of practical possibility" (Miles 1957: 278), but is regarded as an inevitable technological outcome. Citing the UN Declaration of Rights, Torrance (2008: 501) asked, "are there any circumstances under which it might be morally appropriate for us to consider extending such rights specifically to humanoids that are taken to be devoid of phenomenal consciousness?" The question is rhetorical in the context of his argument, but it is not a thought experiment. It attests to problematizing personhood for a different reason than did Miles. Whereas Miles built his ontological argument around a hypothetical classification dilemma, Torrance points to an issue in applied ethics that will become real (at least according to those who raise it). Miles sought to establish logical criteria for defining personhood – how to categorize person/not-person, thereby to answer what makes us 'special'. Torrance endorses the 'organic view' (his phrase) according to which AIs are unlikely to possess sentience, hence will lack the kind of empathic rationality that is necessary for being a moral agent. The two texts are separated by half a century during which an old fantasy has taken a decisive twist. It has turned from a playful as-if to a realistic what-if.
The subjectivity paradox
Back in the day when humanoid robots were confined to fiction, Miles (1957) dismissed criteria such as natural birth and consciousness as reasons for denying homo mechanisma the status of a person. He proposed that the machine's lack of a body-schema is a sufficient reason. Irrespective of how cogent this criterion is as far as philosophical arguments go, and aside from the technology's advances towards giving a body schema to artificial systems (e.g. Sturm et al. 2009), it is germane here that Miles (a psychologist) invoked our capacity to be reflexively aware of our own embodiment – in a word, subjectivity – as the irrefutable hallmark of personhood.
The invocation of subjectivity rests uneasily with monism. Monism is encapsulated in the title theme of La Mettrie's (1748 [1912]: 148) treatise L'Homme machine and his summing up: "Let us then conclude boldly that man is a machine." La Mettrie sought to resolve the problem of Cartesian dualism by proposing to consider all mental faculties as aspects of corporeal or material substance (res extensa). Building upon a passing comment made by psychoanalyst Jacques Lacan in a seminar – in which Lacan urged his audience to read La Mettrie – de Vos (2011: 71) positions La Mettrie as "one of the first to understand that with the emergence of science we also see the emergence of the symbolic, mathematized body" with zero subjectivity. This symbolic body epitomises a paradox of modernity already present in La Mettrie's thesis: "drawing the cogito into the res extensa cannot be achieved without a remainder" (ibid: 70). There is inevitable subjectivity in imagining oneself as a being with zero subjectivity. This paradox has been endemic to modern psychology ever since the discipline's formation as a natural science in the late nineteenth century. In his essay "Are we automata?" William James (1879) queried neurologists' view that subjective states are merely epiphenomena of brain activity. More than a century later, neuroscientists have claimed significant advances towards understanding how the brain generates subjective states. Damasio (1994) revisits the contestable Cartesian separation of body, emotion and rationality. He proposes to resolve it by postulating a brain-based mechanism whereby emotional input may guide rational behaviour and decision-making (the somatic marker hypothesis). Not everyone concurs with Damasio's conception of the emotional experience. While debates about the details are rife, a widely shared faith in neuroscience as the royal road to understanding subjectivity could be viewed as a triumph of l'homme machine.
This triumph makes it conceivable to reverse-engineer the brain so as to create a self-aware artificial intelligence – and the technology appears to be catching up.
The technological plausibility of artificial minds gives the subjectivity paradox a new twist. Nowadays we don't have to imagine ourselves being machines devoid of subjectivity. We imagine machines with subjective states. This fact, the act of imagining such machines – rather than the issue of whether artificial intelligence could be self-aware in the way humans are – gives impetus to the present study. The voluminous literature surrounding machine consciousness is left out of this book.
The analyses reported throughout the following chapters problematize personhood differently by seeking to locate 'human' in the technology-centred discourse about socially interactive robots. Depending on the specific purpose for which they are built, such robots may or may not need a consciousness similar to ours. Pragmatically it may be more important to ensure that people experience the interaction with the robot as a natural interaction with other people. Engineers typically translate this problem into the technical challenges of designing robots that give the illusion of making eye contact, expressing emotions, and so forth. Yet when we try to imagine what would enable or hinder people's perceiving a good-enough fake as if it were another person, the subjectivity paradox inevitably arises – even if it is not always confronted in the engineering literature. The subjectivity paradox at this juncture relates to the distinction between the 'mechanics' of making eye contact with someone and the experiential quality of a mutual glance (Chapter 7 takes this further). The absence of this quality when people interact with humanoid robots could make people feel ill at ease.
The uncanny valley hypothesis, proposed by the Japanese roboticist Mori in 1970 (the topic of Chapter 8), predicts negative emotional reactions to artefacts that are too similar to a human. One compelling explanation is that humanoid robots are "potential transgressors, trammelling our sense of identity and purpose" as human beings due to a deeply held worldview that distinguishes human from nonhuman (MacDorman et al. 2009: 486). MacDorman and his colleagues elaborate the idea in the cognitive-psychological terms of a 'category boundary problem': "there is something disturbing about things that cross category boundaries" (ibid: 487). I shall make a similar point, but prefer to term it ontological dissonance. To MacDorman and others in the field of robotics, the resolution of this problem lies in changing our worldview so as to include electromechanical persons in the same category as humans (MacDorman and Cowley 2006; Ramey 2005). However, besides being only an intellectual exercise at present, this solution does not eliminate the 'subjectivity' issue epitomised in the mutual glance – the meeting of two souls. Lord Byron powerfully dramatizes the union of irreconcilable opposites. The gate of Heaven, where souls are despatched to one afterlife or the other, is a "neutral space", says the poet,
And therefore Michael and the other wore
A civil aspect; though they did not kiss,
Yet still between his Darkness and his Brightness
There passed a mutual glance of great politeness.
(Byron, 1824 [2010]: 499)
I want to underline more prosaically the meeting of opposites – Self and Other, I and You – which constitutes a differentiation that is foundational for consciousness of oneself as a person.
The last century has given rise to a plethora of theories that in various ways, with differing emphases and philosophical lineages, attribute the possibility of having a sense of self to this I–You dialogical space, this state of betweenness that spontaneously happens even in a casual mutual glance. This is the social model of personhood. It is indigenous to Western individualism. This model is inevitably invoked in the discourse of social robotics, at least in the Anglophone world, through the imaginative insertion of a robot into the "I–You". Ripples of the ingrained social model spread out to touch academic specialisms that have made little or no contact with each other. Furthermore, since the industry is significantly led by Japan and South Korea, ripples of Western individualism intermingle with the mode of self-construal in collectivist societies and with influences of Eastern systems of thought, such as Buddhism and Confucianism. I'll return to that East/West contrast at various points throughout the book.
Taken into (Western) academia in one direction, the ripple effect of the 'I–You' in social robotics meets the mind–body problem of how human bodies construct themselves as persons, an issue that MacDorman (2007) has called 'the person problem' (Chapter 5 expands). This classic problem has some pragmatic implications for engineering. If scientists could determine exactly how human bodies become persons, perhaps engineers could make it happen also in electromechanical bodies. Separately and much earlier, G. H. Mead (1934: 137) purported to resolve the problem by reference to language: as a system of symbols, language creates the possibility of referring to oneself, and thus provides the human being with "some sort of experience in which the physical organism can become an object to itself". Language makes it possible for us to enter our own experiences "as a self or individual" by taking upon us the attitudes of others with whom we share contexts of experience (ibid: 138). This premise has become a staple truth in sociology. As citations in later chapters will attest, the sociological tradition makes it reasonable (at least for its adherents) to suggest that in the near future people will enter their experiences as selves also by sharing contexts with artificial others. I shall query that reasoning. The various elaborations of Mead's idea during the last century often result in converting the inquiry about subjectivity into empirical description of how people talk about themselves. In contrast, this study aims to describe the subtle ways in which the subjectivity paradox impacts on social robotics – a context in which theories of the self or personhood are occasionally imported but seldom (or never) created. The tacit impact of the social model starts with the very concept of a social robot.
Social robots as discursive productions
Robotics is a rapidly developing branch of engineering, within which social robotics is a field dedicated to designing systems that interact with people. When discourse is understood as a system of statements that involve "practices that systematically form the objects of which they speak" (Foucault 1969: 54), it is clear that a robot has a dual life. It is both a machine built by engineers and an object created in discourse. While this study centres on the discursive production, my premise is that there are significant discontinuities between the psychological functions of robots imagined in fiction, film and art, on the one side, and robots as objects that are formed in discourse about machines with which people actually or potentially interact, on the other (Chapter 3 takes a closer look). The realistic possibility of interaction, this switch from as-if to what-if, makes the difference.
Defining the socially interactive robot
Common definitions of a social robot refer to a machine that interacts with people within some defined relational role, such as a companion, tutor or nurse. Some such robots are autonomous, equipped with artificial intelligence that enables the robot to respond to cues from its environment, but some are remotely controlled by a human operator. The sociality of the robot is therefore a property of its interaction with people, not an engineered feature as such. For this book's purposes, a serviceable definition of social robots is physically embodied intelligent systems that enter social spaces in community and domestic settings. This excludes disembodied automated response systems, search engines, etc. (which are already part of most people's everyday life). The metaphor of a social space takes the definition beyond merely listing settings in which robots may be installed (hospitals, schools, shops, the home, etc.) and towards regarding people's experience as the salient criterion (e.g. a robot nurse may enter patients' social space whereas a robot surgeon or a robot janitor in the same hospital wouldn't).
Strictly speaking, my proposed definition excludes robots designed as labour-saving appliances. Yet researchers who gave American households vacuuming robots (Roombas) report that some of the participants came to regard the robot as a social entity, e.g. ascribing lifelike qualities to it and giving it a name, gender and personality (Sung et al. 2010). Having observed that people were demonstrating the robot to visitors, and that some took the robot along on vacation to show it around, the researchers concluded that even a robot vacuum cleaner can become a "mediator that enhances social relationships among household members" (ibid: 423). Based on the findings, they formulated a conceptual framework (the Domestic Ecology Model) that "articulates the holistic and temporal relationships that robots create with surrounding home environment" (ibid: 428). Washing machines and television sets too could be said to exist in holistic and temporal relationships with the home environment. People often anthropomorphize their cars, computers, or other objects of emotional attachment. Nonetheless, it seems a peculiarity of talking about robots that the machine leaps out of the material environment and into the social. The leap is subtle but implicates a fundamental shift in the understanding of sociality – a shift from construing sociality as a trait of the individual (human or robot) to construing it as a property of the dyadic interaction.
An early definition of social robots highlights the functional similarity to human and animal individuals within a group:
Social robots are embodied agents that are part of a ...