The Frontlines of Artificial Intelligence Ethics

Human-Centric Perspectives on Technology's Advance

Andrew J. Hampton, Jeanine A. DeFalco
About This Book

This foundational text examines the intersection of AI, psychology, and ethics, laying the groundwork for ethical considerations in the design and implementation of technologically supported education, decision support, and leadership training.

AI already affects our lives profoundly, in ways both mundane and sensational, obvious and opaque. Much academic and industrial effort has considered the implications of this AI revolution from technical and economic perspectives, but the more personal, humanistic impact of these changes has often been relegated to anecdotal evidence in service to a broader frame of reference. Offering a unique perspective on the emerging social relationships between people and AI agents and systems, Hampton and DeFalco present cutting-edge research from leading academics, professionals, and policy standards advocates on the psychological impact of the AI revolution. Structured into three parts, the book explores the history of data science, technology in education, and combatting machine learning bias, as well as future directions for the emerging field, bringing the research into the active consideration of those in positions of authority.

Exploring how AI can support expert, creative, and ethical decision making in both people and virtual human agents, this is essential reading for students, researchers, and professionals in AI, psychology, ethics, engineering education, and leadership, particularly military leadership.


Information

Publisher: Routledge
Year: 2022
ISBN: 9781000576207
Length: 212 pages
Language: English
Format: ePUB (mobile friendly)

Part 1 Surveying the AI Landscape

Chapter 1 AI and the Crisis of the Self: Protecting Human Dignity as Status and Respectful Treatment

Ozlem Ulgen
DOI: 10.4324/9781003030928-3

… a great deal of the work which was formerly done by human beings is now being done by machinery. This machinery belongs to a few people: it is being worked for the benefit of those few, just the same as were the human beings it displaced. These Few have no longer any need of the services of so many human workers, so they propose to exterminate them! The unnecessary human beings are to be allowed to starve to death! And they are also to be taught that it is wrong to marry and breed children, because the Sacred Few do not require so many people to work for them as before!
(Tressell, 2004, p. 114)

1 Introduction

Over a century since Robert Tressell’s prescient novel, the unsettling reality of technology replacing humans continues. A tidal wave of messianic worship for AI, robotics, “Big Data,” and the “Internet of Things” is upon us, articulated mainly through the efficiency paradigm: improving productivity, enhancing human capabilities, and reducing time spent on mundane tasks. With algorithms that determine student grades, personalize online marketing, approve financial credit applications, assess pre-trial bail risk, and select human targets in warfare, it seems we are willingly complicit in relinquishing decision-making powers to machines. As Tressell reminds us, we need to understand who “These Few” controlling the technology are, and to what purpose it is put, rather than completely repudiate technological innovation (2004). The Nobel Prize-winning economist Joseph Stiglitz (2018) warns that without governmental policies that support sharing the increased productivity from AI across society, there will be rising unemployment, lower wages, and acute social inequalities. Against this backdrop of political, social, and economic challenges, and viewed from a moral philosophical perspective, the unfettered use of AI that diminishes human agency and decision-making powers undermines human dignity. The detrimental impact of AI on human dignity is not so easily understood, especially when its justification is presented as some sort of gain for humanity: saving time or energy, or delegating routine tasks. But human interaction that is mediated by technology penetrates the core of what it means to be human: the autonomy and agency to engage in free thinking and to exercise reasoning, judgement, and choice. This is the moral value of human dignity.
In this chapter, I argue that human dignity is a universal moral value that should be at the center of policy formulation and laws governing AI innovation and its impact on societies. Part 2 sets out concerns about AI innovation and its potential adverse impact on human dignity. Part 3 considers how diverse cultures, international legal instruments, and constitutional laws represent human dignity as innate human worthiness that is a universal moral value, a right, and a duty. Part 4 develops two distinct dimensions of human dignity which can be concretized in policy and law relating to AI: (1) recognition of the status of human beings as agents with autonomy and the rational capacity to exercise reasoning, judgement, and choice; and (2) respectful treatment of human agents so that their autonomy and rational capacity are not diminished or lost through interaction with or use of the technology.

2 AI Innovation and Impact on Human Dignity

It is impressive how AI is being developed for use in different domains and real-life settings: algorithms determining student grades, personalizing online marketing, approving financial credit applications, assessing pre-trial bail risk, and selecting human targets in warfare. But is it morally right to deploy AI in such scenarios, when these inanimate, deterministic activities have human consequences? In the UK and Europe, the ongoing COVID-19 pandemic meant that students were unable to sit the exams necessary for entry into university. Instead, predictive algorithms, relying on past student performance and averaging, determined grades, leading to anomalies, bias, and unfair results (Zimmermann, 2021). With clear consequences for future educational and employment prospects, it seems immoral and reckless to have algorithms performing grading functions that reduce individual students to mere statistics without applying human judgement. Applying the data processing and personal data rights contained in the EU General Data Protection Regulation (GDPR, European Parliament and Council of the European Union, 2016), the Norwegian Data Protection Authority claimed the International Baccalaureate Organisation breached Articles 5(1)(a) and 5(1)(d) by using a profiling algorithm which did not process student grades fairly, accurately, and transparently (2020), and requested rectification of the grades.
Pre-trial bail risk algorithms used to assist human decision-making may seem like good examples of human-machine interaction. But reliance on poor datasets, and automation bias on the part of the human, result in unfair outcomes. In the United States, a pre-trial bail risk assessment algorithm, used by judges to decide whether to release a defendant on bail or to remand them in custody, has come under increasing scrutiny. Among others, the Pretrial Justice Institute, a nonprofit organization that previously advocated the use of algorithms instead of cash bail, withdrew support for their use because such algorithms perpetuate racial inequities (2020; Open Letter by Academics, 2019). And at the extreme end, in warfare, an algorithm may be determining who should be selected and attacked as a military objective, leading to injury and death (Ulgen, 2019b). Unfairness, inequality, restrictions on liberty, and life-or-death decisions form a concerning list of real human consequences of AI systems.
Reflecting on the relationship between humans and technology, we see that throughout history societal changes have occurred as a result of new knowledge and technological innovation. Economic historians refer to four phases of innovation shaping economic development: the mechanization of textile manufacturing; railroads and steam from 1840 to 1890; steel, engineering, and electricity from 1890 to 1930; and the automobile, fossil fuels, and aviation from 1930 to 1990 (Freeman & Louçã, 2001; Rosenberg & Birdzell, 1986). AI-based technologies fall into the post-1990 phase of economic development. This “fourth revolution” includes the information and communication technologies, AI, and autonomous robotics impacting every aspect of our lives today (Floridi, 2014). Yet a single invention cannot be the sum of our lives, problems, or solutions.
The drive toward greater efficiency and increased productivity precipitates the AI innovation Ferris wheel: a never-ending cycle of innovation to counter human fallibility that rewards slavish adoption and punishes the reticent human mind. Byung-Chul Han (2017) refers to this as “psychopolitics”: a form of control of the human psyche, exerted through technological domination and the use of personal data in the public and private spheres, that alters our minds and behavior to an extent that undermines our autonomy and agency. If we are constantly having to sync different platforms, update new software, and connect systems with systems so that we can access even bigger systems, we are losing sight of ourselves and becoming entangled in a techno-bureaucracy purposely constructed by two strange bedfellows: the regulators and the hackers. Both contribute to the crisis of the self.

2.1 The Techno-Bureaucracy of Hackers and Regulators

Hackers want to explore and exploit new technology vulnerabilities to serve their own illicit purposes, thereby increasing demand for stronger security measures from regulators. Regulators (seemingly concerned with human well-being and the protection of rights) introduce layers of complexity through overlapping and competing non-legally binding and legally binding rules, ethical principles, and processes contained in global, regional, and national ethical frameworks, standards, and instruments (e.g., GDPR, 2016; EU AI Guidelines, 2019; G20, 2019; IEEE, 2019; OECD, 2019; UN Secretary-General’s High-Level Panel on Digital Cooperation, 2019; AI Act Proposal, European Parliament and Council of the European Union, 2021). Meanwhile, private sector corporate entities, the military, and the state continue to develop AI under the radar of any enforceable regulation.
It is unclear how these divergent ethical and legal initiatives apply across jurisdictions and alongside national legislation. The rules, principles, and processes are often impenetrable to the ordinary person. Take, for example, the legal concept of “responsibility,” which determines who or what will be held liable for any harm or damage caused by the technology. AI has the potential to disrupt the chains of attribution and causation unless there is always a human who will be held responsible throughout the AI design, development, and deployment stages.
Self-learning algorithms and robots present the spectre of harmful and unattributable behaviors, which at the same time undermine the human agency of foresight, prudence, and judgement in taking action with consequences in mind. Although responsibility is a priority ethical value and a legal requirement contained in several global, regional, and national regulatory frameworks, its interpretation and implementation differ.
The UK recognizes legal responsibility, accountability, and legal liability as key issues in the application of the law to AI, but focuses on developing principles of accountability and intelligibility (which are not the same as legal responsibility or liability), with a possible review of the adequacy of existing legislation on legal liability (UK House of Lords Select Committee, 2018). For China, although responsibility is a core principle applicable at both the AI development and deployment stages, it is situated within an ethical framework biased toward commercial exploitation for the purpose of domestic economic growth. It is unclear who or what will be held legally responsible, and future policies and laws may contain a commercial intellectual property/trade secrets exemption preventing disclosure of algorithmic models, datasets, and algorithmic reasoning (Standards Administration of China, 2018).

2.2 Freeing or Enslaving?

Whether AI-based solutions to everyday tasks are freeing or enslaving bears on the crisis of the self. Does AI free up the human mind to undertake complex, qualitative, judgement-based tasks instead of routine ones such as memorizing numbers, memory recall, and mental arithmetic? Or is more time spent frustrated by the technology: how it works, the errors it produces, rectifying those errors, and seeking redress? In theory, more AI-assisting jobs should become available, leaving routine tasks to machines. In practice, such jobs are few and far between, with too little training offered by employers to make the transition from displacement by machines to human-machine teaming (e.g., Semuels, 2020).
Among other mental tasks, recall and mental arithmetic stimulate the brain. Arguably, if we become dependent on technology for the simplest of tasks, we are enslaved by the technology and forget how to function. Automation bias is a manifestation of such enslavement, whereby in human-machine tasks the human operator favors the machine’s response over their own judgement, with major repercussions for lives and livelihoods (Cummings, 2004; Raja & Dietrich, 2010).
De-skilling may also occur through automata-like behavior, in which humans are reduced to binary responses without independent critical thinking or judgement. Studies show that heavy use of digital technologies causes neurological changes that impede comprehension, retention, and deeper thinking (DeStefano & LeFevre, 2007; Small & Vorgan, 2008; Sweller, 1999; Zhu, 1999). This diminishes human agency and dignity, with potentially serious repercussions for other humans. Remote pilots of armed unmanned aerial vehicles, for instance, thousands of miles away from conflict zones and viewing video images of targets to select and attack, have been shown to exhibit moral disengagement and a lack of deeper thinking. They are less fearful of being killed and less inhibited about killing. They have problems identifying targets and reduced situational awareness in complex scenarios, resulting in civilian...
