1 Introduction
Over a century since Robert Tressell's prescient novel, the unsettling reality of technology replacing humans continues. A tidal wave of messianic worship for AI, robotics, "Big Data," and the "Internet of Things" is upon us, mainly articulated through the efficiency paradigm: improving productivity, enhancing human capabilities, and reducing time spent on mundane tasks. With algorithms that determine student grades, personalize online marketing, approve financial credit applications, assess pre-trial bail risk, and select human targets in warfare, it seems we are willingly complicit in relinquishing decision-making powers to machines. As Tressell reminds us, we need to understand who "These Few" controlling the technology are and to what purpose it is put, rather than repudiate technological innovation altogether (2004). The Nobel Prize-winning economist Joseph Stiglitz (2018) warns that without governmental policies that share the productivity gains from AI across society, there will be rising unemployment, lower wages, and acute social inequalities. Against this backdrop of political, social, and economic challenges, and viewed from a moral philosophical perspective, unfettered use of AI that diminishes human agency and decision-making powers undermines human dignity. The detrimental impact of AI on human dignity is not easily understood, especially when its justification is presented as some sort of gain for humanity: saving time, saving energy, or delegating routine tasks. But human interaction mediated by technology penetrates the core of what it means to be human: the autonomy and agency to engage in free thinking and to exercise reasoning, judgement, and choice. This is the moral value of human dignity.
In this chapter, I argue that human dignity is a universal moral value that should be at the center of policy formulation and laws governing AI innovation and impact on societies. Part 2 sets out concerns about AI innovation and its potential adverse impact on human dignity. Part 3 considers how diverse cultures, international legal instruments, and constitutional laws represent human dignity as innate human worthiness that is a universal moral value, a right, and a duty. Part 4 develops two distinct dimensions of human dignity which can be concretized in policy and law relating to AI: (1) recognition of the status of human beings as agents with autonomy and rational capacity to exercise reasoning, judgement, and choice; and (2) respectful treatment of human agents so that their autonomy and rational capacity are not diminished or lost through interaction with or use of the technology.
2 AI Innovation and Impact on Human Dignity
It is impressive how AI is being developed for use in different domains and real-life settings: algorithms determining student grades, personalizing online marketing, approving financial credit applications, assessing pre-trial bail risk, and selecting human targets in warfare. But is it morally right to deploy AI in such scenarios, where inanimate deterministic processes have human consequences? In the UK and Europe, the ongoing COVID-19 pandemic meant students were unable to sit the exams necessary for entry into university. Instead, predictive algorithms, relying on past student performance and averaged grades, produced anomalies, bias, and unfair results (Zimmermann, 2021). With clear consequences for future educational and employment prospects, it seems immoral and reckless to have algorithms perform grading functions that reduce individual students to mere statistics without applying human judgement. Applying the data-processing and personal-data rights contained in the EU General Data Protection Regulation (GDPR, European Parliament and Council of the European Union, 2016), the Norwegian Data Protection Authority claimed the International Baccalaureate Organisation breached Articles 5(1)(a) and 5(1)(d) by using a profiling algorithm that did not process student grades fairly, accurately, and transparently (2020). It requested rectification of grades.
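The mechanism behind such grading anomalies can be pictured with a minimal, hypothetical sketch. The function, the blending weight, and the values below are illustrative assumptions, not the International Baccalaureate's actual model; the point is only to show how moderating an individual's grade toward a cohort average reduces the student to a statistic:

```python
# Hypothetical sketch of cohort-based grade moderation (illustrative only;
# NOT the IB's actual algorithm). A student's teacher-assessed grade is
# blended with their school's historical average grade.

def moderated_grade(teacher_grade: float, school_mean: float,
                    weight: float = 0.6) -> int:
    """Pull the individual assessment toward the school's historical mean.

    `weight` is the (assumed) influence of the cohort statistic.
    """
    return round((1 - weight) * teacher_grade + weight * school_mean)

# Two students with identical teacher-assessed grades (7, the top IB mark):
at_low_scoring_school = moderated_grade(teacher_grade=7, school_mean=4)
at_high_scoring_school = moderated_grade(teacher_grade=7, school_mean=7)
# The first student's grade is pulled down by their school's past cohorts,
# through no fault of their own -- the unfairness the regulator objected to.
```

Under these assumed numbers, a high-achieving student at a historically low-scoring school receives a lower moderated grade than an identically assessed student at a high-scoring school: the algorithm's "fairness" failure is structural, not a bug.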
Pre-trial bail risk algorithms used to assist human decision-making may seem like good examples of human-machine interaction. But reliance on poor datasets, and automation bias on the part of the human, result in unfair outcomes. In the United States, a pre-trial bail risk assessment algorithm, used by judges to decide whether to release a defendant on bail or to remand them in custody, has come under increasing scrutiny. Among others, the Pretrial Justice Institute, a nonprofit organization that previously advocated using algorithms instead of cash bail, withdrew support for their use because such algorithms perpetuate racial inequities (2020; Open Letter by Academics, 2019). And at the extreme end, in warfare, an algorithm may determine who should be selected and attacked as a military objective, leading to injury and death (Ulgen, 2019b). Unfairness, inequalities, restrictions on liberty, and life-or-death decisions form a concerning list of real human consequences of AI systems.
Reflecting on the relationship between humans and technology, societal changes throughout history have occurred as a result of new knowledge and technological innovation. Economic historians refer to four phases of innovation shaping economic development: the mechanization of textile manufacturing; railroads and steam from 1840 to 1890; steel, engineering, and electricity from 1890 to 1930; and automobiles, fossil fuels, and aviation from 1930 to 1990 (Freeman & Louçã, 2001; Rosenberg & Birdzell, 1986). AI-based technologies fall into the post-1990 phase. This "fourth revolution" includes information and communication technologies, AI, and autonomous robotics, impacting every aspect of our lives today (Floridi, 2014). Yet a single invention cannot be the sum of our lives, problems, or solutions.
The drive toward greater efficiency and increased productivity propels the AI innovation Ferris wheel: a never-ending cycle of innovation to counter human fallibility that rewards slavish adoption and punishes the reticent human mind. Byung-Chul Han (2017) calls this "psychopolitics": a form of control of the human psyche exerted through technological domination and the use of personal data in the public and private spheres, altering our minds and behavior to an extent that undermines our autonomy and agency. If we are constantly having to sync different platforms, update new software, and connect systems with systems so that we can access even bigger systems, we lose sight of ourselves and become entangled in a techno-bureaucracy purposely constructed by two strange bedfellows: the regulators and the hackers. Both contribute to the crisis of the self.
2.1 The Techno-Bureaucracy of Hackers and Regulators
Hackers want to explore and exploit new technology vulnerabilities to serve their own illicit purposes, thereby increasing demand for higher security measures from regulators. Regulators (seemingly concerned with human well-being and protection of rights) introduce layers of complexity through overlapping and competing non-legally binding and legally-binding rules, ethical principles, and processes contained in global, regional, and national ethical frameworks, standards, and instruments (e.g., GDPR, 2016; EU AI Guidelines, 2019; G20, 2019; IEEE, 2019; OECD, 2019; UN Secretary-General's High Level Panel on Digital Cooperation, 2019; AI Act Proposal, European Parliament and Council of the European Union, 2021). Meanwhile, private sector corporate entities, the military, and the state continue to develop AI under the radar of any enforceable regulation.
It is unclear how divergent ethical and legal initiatives apply across jurisdictions and alongside national legislation. The rules, principles, and processes are often impenetrable to the ordinary person. Take, for example, the legal concept of "responsibility," which determines who or what will be held liable for any harm or damage caused by the technology: AI has the potential to disrupt attribution and causation chains unless a human is always held responsible throughout the AI design, development, and deployment stages.
Self-learning algorithms and robots present the spectre of harmful and unattributable behaviors, which at the same time undermine the human capacities of foresight, prudence, and judgement in taking action with consequences in mind. Although responsibility is a priority ethical value and a legal requirement contained in several global, regional, and national regulatory frameworks, its interpretation and implementation differ.
The UK recognizes legal responsibility, accountability, and legal liability as key issues in application of the law to AI, but focuses on developing principles of accountability and intelligibility (which are not the same as legal responsibility or liability) with possible review of the adequacy of existing legislation on legal liability (UK House of Lords Select Committee, 2018). For China, although responsibility is a core principle applicable at both the AI development and deployment stages, it is situated within an ethical framework biased toward commercial exploitation for the purpose of domestic economic growth.
It is unclear who or what will be held legally responsible, and future policies/laws may contain a commercial intellectual property/trade secrets exemption preventing disclosure of algorithmic models, datasets, and algorithmic reasoning (Standards Administration of China, 2018).
2.2 Freeing or Enslaving?
Whether AI-based solutions to everyday tasks are freeing or enslaving bears on the crisis of the self. Does AI free up the human mind to undertake qualitative, judgement-based complex tasks instead of routine number memorization, memory recall, and mental arithmetic? Or is more time spent frustrated by the technology (how it works, the errors it produces, rectifying those errors, and seeking redress)? In theory, more AI-assisting jobs should become available, leaving routine tasks to machines. In practice, such jobs are few and far between, with too little training offered by employers to make the transition from displacement by machine to human-machine teaming (e.g., Semuels, 2020).
Among other mental tasks, recall and mental arithmetic stimulate the brain. Arguably, if we become dependent on technology for the simplest of tasks, we are enslaved by it and forget how to function. Automation bias is one manifestation of such enslavement: in human-machine tasks, the human operator favors the machine's response over their own judgement, with major repercussions for lives and livelihoods (Cummings, 2004; Raja & Dietrich, 2010).
De-skilling may also occur through automaton-like behavior, with humans reduced to binary responses without independent critical thinking or judgement. Studies show that heavy use of digital technologies causes neurological changes that impede comprehension, retention, and deeper thinking (DeStefano & LeFevre, 2007; Small & Vorgan, 2008; Sweller, 1999; Zhu, 1999). This diminishes human agency and dignity, with potentially serious repercussions for other humans. Remote pilots of unmanned armed aerial vehicles, for instance, thousands of miles away from conflict zones viewing video images of targets to select and attack, have been shown to exhibit moral disengagement and a lack of deeper thinking. They are less fearful of being killed and less inhibited about killing. They have problems identifying targets, and reduced situational awareness in complex scenarios resulting in civilian...