1 Introduction
Digital society's techno-totalitarian matrix
Ismael Al-Amoudi and Emmanuel Lazega
In William Gibson's Matrix trilogy, most humans live Hobbesian lives: solitary, poor, nasty, brutish and short. Privately owned companies exert a de facto monopoly on technology and violence through the use of subservient "salary-men" and through mastery of expensive technologies for spying on and killing, but also for upgrading and downgrading, human beings. Throughout a complex plot, humans of various levels of enhancement are manipulated by artificial intelligences (AIs) that seek to bypass the material safeguards and limitations imposed on them by their human creators and owners. These artificial intelligences appear to have developed some form of consciousness, though one that is very, if perhaps not radically, remote from human consciousness. As the story ends, the uploaded mind of a dead protagonist marries his beloved in the Matrix while AIs start colonising a nearby galaxy.
When the Matrix trilogy was published in the mid-1980s, it introduced to mass culture a number of post-human tropes that have haunted our collective imaginaries ever since. The most noted is arguably the eponymous Matrix, an information network that prefigures the development of the World Wide Web. But the Matrix trilogy also contains several other themes that inspired not only subsequent science fiction writers and cyberpunk fashionistas but also many of the scientists, engineers, entrepreneurs and intellectuals who invented, designed, marketed and commented on the technologies born at the turn of the 21st century.
Indeed, while conscious machines do not exist in 2018, the questions about their possibility in principle and about the process through which they may emerge remain open (Archer, present volume). While mind-uploading is still a fantasy, an increasing amount of human interaction, including the intimate, happens electronically and via online social networks (Donati, present volume). While interactions with bots and robots remain less baffling than in the Matrix, their pervasiveness already raises questions about the emergence of new forms of sociality (Maccarini, present volume). While the domination of a small elite that reaps the benefits of technology is not as stark as in the Matrix, current automation trends are largely excluded from public debate and left to a few powerful actors, public and private, who seek to influence rather than inform citizens and their representatives (Morgan, present volume). While artificial intelligence (AI) is not capable of consciously manipulating human beings for its own concerns, it has already started to bear on normative decisions in ways that undermine the ethics of human discussion (Al-Amoudi & Latsis, present volume). Finally, while turn-of-the-century soldiers and hit-men do not benefit from the extraordinary healthcare imagined by Gibson, they can already count on AI-based systems of operational support that multiply combat efficiency at the expense of team oppositional solidarity and personal tacit knowledge (Lazega, present volume), thus generating new organisational models.
The aim of our book, however, is neither to marvel at Gibson's prophetic vision nor to describe the gap that still exists between science fiction and science tout court. Our purpose is rather to discuss the social significance of phenomena we can already observe. We try to understand how post-human technological developments, and especially AI, have started to transform our human agency but also the basic institutions and organisations that hold contemporary societies together: the family (Donati, present volume) and the household (Maccarini, present volume), but also commercial corporations (Morgan, present volume), health institutions and organisations (Al-Amoudi & Latsis, present volume), and the military writ large (Lazega, present volume).
Our collective book opens with a broad but reflexive literature review by Douglas V. Porpora on AI and human enhancement. The review indicates that, while books on AI started to appear in the 1960s, the topic reached a peak of popularity in the 1980s and, in spite of a slight decline, has remained fairly popular since then. But Porpora's review also provides insight into what the press has to say about AI. To do so, he examined all articles published on AI by the International New York Times (INYT) over a period of 50 days randomly selected in the year 2017. The articles gathered over that period provide a reasonably representative sample of how AI is discussed in connection with four broad themes: social developments; the economy; innovation and capacities; and the arts.
Porpora's review is doubly useful for our project, both because it provides a refresher about the new artefacts, practices and institutions emerging as we write and because it helps appreciate some of the limits of media discourse on AI. For instance, a number of articles express dismay in the aftermath of an AI's victory over the human Go champion. The articles' dismay is based, however, on the widespread assumption that mastery of Go is indicative of human-like "intelligence". Yet, the contribution of Archer to the present volume (Archer, present volume; see also Donati, 2019; Morgan, 2019; Porpora, 2019) suggests otherwise: AIs might indeed develop capacities equal (if not superior) in power and worth to those of human minds; however, what is specific to and valuable about human minds is not so much their computational capacities as their endowment with a first-person perspective (Baker, 2000) and their capacity to identify concerns (Archer, 2000) and subsequently reflect on them (Archer, 2007, 2010, 2012). But Archer's contribution to the present volume offers more than a mere philosophical critique of misplaced journalistic dismay; it also describes a plausible process through which an AI (as we know them in 2018) might come to acquire, through interaction with a human being, the essential characteristics of human mind: a first-person perspective, concerns and reflexivity.
One of the INYT streams identified by Porpora relates to what he calls social developments and reports on the pervasiveness of artificially "intelligent" machines in daily life. While most articles assume that the threat of "technological singularity" is still remote and that machines are not even close to over-ruling the world of humans (Kurzweil, 2005), many articles report on the AI-empowerment of familiar objects (e.g. cars, home appliances) and on the appearance of AI-equipped commercial sexbots, that is, machines specifically designed for their owners' sexual enjoyment. The INYT articles do not seem to notice, however, the significance of these technological developments for human sociality. How does living in a world populated with AIs bear on our human capacity to initiate, foster and steer meaningful relations with others?
Yet, the question of human sociality is at the heart of the chapters written by Andrea M. Maccarini and Pierpaolo Donati in the present book. Maccarini (present volume) asks whether interaction with AI-powered machines might encourage people to prefer "pure relations" with AI machines that are devoid both of bodily imperfections and of the character flaws so common to humanity: impatience, envy, laziness and so forth.
Donati's chapter also addresses the evolution of human sociality, though through slightly different concepts. Donati posits the existence and relative unity of a digital matrix consisting in "the globalised symbolic code from which digital artefacts are created in order to help or substitute human agency by mediating inter-human relations or by making them superfluous" (present volume, p. 105). But the recent emergence of the digital matrix is not, Donati argues, an innocuous addition to the social world. Rather, it deeply transforms and even hybridises social relations and the operations of human minds: "The hybridised family turns out to be a family in which everyone is 'alone, together'. Relationships are privatised within the already private space of the family, while the identities of the members travel in the public space of the DM [digital matrix]" (Donati, present volume, p. 111).
Another INYT stream of articles discusses AI's economic implications. Most of these express anxiety about job destruction together with uncertainty about whether AI systems will replace or complement human labour. But the press articles also take at face value the estimates produced and circulated by governmental agencies and influential consultancies and think tanks. And this is precisely where Morgan's contribution to the present volume starts: since "little attention was paid in the press to what the economic models were actually claiming or how they were constructed" (Morgan, present volume, p. 94), Morgan proposes to unpack the analyses and assumptions of the UK Made Smarter Review, an influential positional document on the economic consequences of AI technologies. Among other findings, Morgan shows in detail how the report moves from relatively fragile assumptions to seemingly objective figures and from speculation on a fundamentally open future to claims that "there is no alternative" but to automate quickly enough to safeguard the competitiveness of national firms, thus pitting nations against each other in what could become a race to the bottom.
The third INYT stream identified by Porpora discusses AIs' capacities and limitations. In the face of AI's victory at Go (mentioned previously), human-like capacities of calculation and even of intuition are no longer humanity's preserve. Yet, as several of the shortlisted articles remind us (following Bostrom, 2016), AI programmes are still highly specialised, and those capable of beating a Go champion are incapable of driving a car and vice versa. The implication is not only that AIs are still poor improvisers in unfamiliar contexts but also that they are arguably remote from having their own moral powers or, as Jim Kerstetter has it, "The better question might be: how do you teach a computer to be offended?" (Kerstetter, 2017, cited in Porpora, present volume).
Taking stock of AI's moral limitations, Al-Amoudi and Latsis (present volume) ask a slightly different, arguably overlooked but equally important, question: How does reliance on AI affect the capacity of human beings to discuss normative decisions? While their discussion is centred on public health, their findings are relevant to a wider array of industries and normative discussions.
In the same vein, Lazega (present volume) tracks how AI increases the capacities of command and control in organisations to unobtrusively shape interactions and parametrise collective agency between humans. A military template for this extension of AI both analyses real-time information from multiple sources and uses digital tools engineered to apply mathematical models of animal swarms to the management of army units operating under high stress on battlegrounds. This involves homogenising mental maps, anticipating responses to enemy moves, manipulating emotional reactions, suggesting courses of action, preventing improvised deeds, and defusing oppositional solidarities. Although this capacity currently deals with soldiers, it could generate new organisational models for non-military organisations, in line with a tradition of military and war technology that has long shaped society at large (Centeno & Enriquez, 2016).
Organisational society: smart machines as agents of further bureaucratisation?
Organisational approaches are useful, and perhaps unavoidable, when reflecting on contemporary challenges to the human condition. Over the last century, Weberian sociologists such as Presthus (The Organizational Society), Jacoby (The Bureaucratization of the World), Stone (Where the Law Ends) and Coleman (The Asymmetric Society) raised concerns over the growing importance and even colonisation (Deetz, 1992) of most areas of social life by large private organisations. In the words of Charles Perrow:
The appearance of large organizations in the United States makes organizations the key phenomenon of our time, and thus politics, social class, economics, technology, religion, the family, and even social psychology take on the character of dependent variables. Their subject matter is conditioned by the presence of organizations to such a degree that, increasingly, since about 1820 in the United States at least, the study of organizations must precede their own inquiries.
(Perrow, 1991: 725)
To understand contemporary social change in such organisational societies, two ideal types of organised collective action have been identified: bureaucracy and collegiality (Lazega, 2017, forthcoming). These ideal types, each with its specific formal and informal dimensions, combine social discipline and productive efficiency; they can be observed in real-life companies, associations, cooperatives, public authorities and so forth. The ideal types of bureaucracy and collegiality help us understand the organisational context of work practices, be they routine or innovative. In this dual-logics approach, the bureaucratic model is generally employed to organise collective routine work while concentrating power unobtrusively: command and control at the top and depersonalised interactions among subaltern members. The collegial model, on the other hand, is usually observable in situations requiring collective innovative work with unpredictable output. Through collegial organisation, rival peers self-govern by deliberation and agreements or consensus building and by using personalised relationships and relational infrastructures to manage coordination and cooperation dilemmas.
But the ideal types of bureaucracy and collegiality are seldom present in their pure form throughout any single organisation. Rather, real-life workplaces, communities, markets and societies are replete with combinations of collegiality and bureaucracy. Indeed, organisations that can be called "bureaucratic" (e.g. airlines) are nonetheless managed by a collegial top-team who maintain highly personalised relationships, and conversely, collegial organisations (e.g. private law firms) typically rely on bureaucratically organised support services which interact in largely impersonal ways.
If a lead is taken from the articulation of personalised collegiality and impersonal bureaucracy, the digitalisation of society can be interpreted as both cause and symptom of further and deeper bureaucratisation of society. Does this mean that impersonal interactions, routines, hierarchies and mass production will increasingly characterise our bureaucratised and technocratic contemporary societies? Contributions to this volume address this and underlying issues at varying levels of generality.
Donati argues that human relations are hybridised and even threatened when they are mediated by digital media and smart bots. We are left, however, with the question of how far the depersonalisation of relations can go. Indeed, is a world with human beings but with no personal relations possible in the first place? Or does the digital world necessarily encompass a combination of impersonal transactions and personalised relationships?
Here, a century of organisational sociology and discussion of the bureaucratic model can help us answer. We may draw, in particular, from Jean-Daniel Reynaud's (1989) theory of joint regulation of collective action. From this perspective, there is one dimension of the organisation of collective action that cannot be routinised and that reflects the limits of bureaucracy: the micro- and meso-political negotiation of the "rules of the game". This negotiation fleshes out the normative and moral dimension of action, a process of structural and cultural re/production that is never routine and that escapes the capacities of our very best AIs (Al-Amoudi & Latsis, present volume). Apart from the extreme case of totalitarianism (more on this later in this chapter), organisational members do not assume that complete planning and prediction are achievable or even desirable. Continuous coordination of activities is achievable through common (though necessarily incomplete; see Al-Amoudi, 2010) rules but also through a collective (if contested) project; through (relatively widely) shared cultural schemes of interpretation; and through (reasonably) congruent moral commitments. But the involvement of all actors, even those most subject to bureaucratic control, in negotiation and sense-making does not mean that all are equal in their capacity to defend their regulatory interests. Indeed, the regulatory process produces its share of winners and losers, so much so that Reynaud insightfully reinterprets change and new norms as broken promises. New rules produced by the regulatory process create losers who need to reorganise their practice and joint activities based on the new rules, which raises the issue of how such losers are handled in bureaucratic contexts and in more collegial ones.
For our book's concerns, this means that digitalisation, robots and artificial intelligence are likely to weaken the capacity of most people to defend their regulatory interests but are nonetheless unlikely to eradicate personal relations from the face of society.
Reynaud's reflections on...