Robot Ethics and the Innovation Economy
eBook - ePub, 124 pages, English

About this book

This book provides an authoritative resource on the topic of intelligent robots, artificial intelligence and the ethical implications of these revolutionary innovations. It examines the moral and ethical problems that arise in relation to the development, design and use of intelligent robots, which are capable of autonomous or semi-autonomous decision-making. These problems might relate, for example, to medical robots, driverless cars, intelligent military drones, pedagogical robots, police robots, legal robots and many others.

The main question addressed in this book is how we can understand, explain and apply the concept of ethics in relation to intelligent robots and artificial intelligence. In each chapter, the author examines a different aspect of this question. The author also asks how we can ensure that intelligent robots are of service to humans and under what conditions intelligent robots could become more ethical than humans. The book employs an original approach to examining this cutting-edge research question, combining different research areas, and offers a wealth of practical relevance and real-world examples, illustrated through vivid case studies. With its jargon-free approach and a dedicated chapter on relevant concepts at the end, this book is also accessible to readers without prior knowledge of intelligent robots and the Fourth Industrial Revolution.

By providing a general account of this debate, and of the consequences of the innovations resulting from these trends, the book serves as an important contribution to the discussion. It will find a natural readership among scholars and students of the innovation economy and among those concerned with the ethical considerations arising in the wake of the Fourth Industrial Revolution.


Information

Publisher: Routledge
Year: 2021
eBook ISBN: 9781000398595
Edition: 1

1 Intelligent robots and ethics

The key ideas of the chapter

1. Intelligent robots will be increasingly able to reflect on ethics, and will be more able to act ethically than humans.
2. Responsibility for a mistake made by a military robot, for example an incident where civilians, sick people or children are injured or killed, lies with the drone pilot operating the robot. An analogous situation would be that of a car driver whose brakes fail.
3. Through the development of the intelligent robot, we have become what we actually are: rational, logical, instrumental actors who value what can be weighed, measured and counted. We have created intelligent robots in our own image, and we are frightened by what we see.
4. The ethical mirror: fear of intelligent robots is nothing more than fear of what we are turning into as humans: rational, logical agents who do not take into account the intelligence of a smile, emotions or the touch of a hand.
5. In intelligent robots, ethical programming is structured hierarchically as three levels of logic: the contextual and cultural level; the situational level; and the operational level (a code sketch follows this list).
6. Intelligent robots reflect on unethical actions performed by unintelligent humans.
7. Under certain conditions, intelligent robots can have autonomous moral responsibility. This is an innovation in the context of moral philosophy.
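The three-level hierarchy in key idea 5 can be pictured in code. The sketch below is a minimal, hypothetical illustration of how such a hierarchy might be arranged, with the contextual and cultural level constraining the situational level, which in turn constrains the operational level; the class names, norms and rules are placeholders invented for illustration, not taken from the book.

```python
from dataclasses import dataclass

# Hypothetical sketch of the three-level hierarchy in key idea 5; the class
# names, norms and rules are illustrative placeholders, not from the book.

@dataclass
class Context:
    culture: str          # input to the contextual and cultural level
    situation: str        # input to the situational level
    proposed_action: str  # input to the operational level

class EthicalController:
    """Checks a proposed action top-down through the three levels of logic."""

    def __init__(self, cultural_norms: dict, situational_rules: dict):
        self.cultural_norms = cultural_norms        # actions a culture forbids
        self.situational_rules = situational_rules  # actions a situation forbids

    def permitted(self, ctx: Context) -> bool:
        # Level 1: contextual/cultural norms can veto an action outright.
        if ctx.proposed_action in self.cultural_norms.get(ctx.culture, set()):
            return False
        # Level 2: situational rules refine what the context allows.
        if ctx.proposed_action in self.situational_rules.get(ctx.situation, set()):
            return False
        # Level 3: the operational level executes only what levels 1-2 permit.
        return True

controller = EthicalController(
    cultural_norms={"hospital": {"harm_patient"}},
    situational_rules={"emergency": {"delay_treatment"}},
)
print(controller.permitted(Context("hospital", "emergency", "delay_treatment")))  # False
```

The top-down ordering matters in this sketch: a culturally forbidden action never reaches the situational or operational checks, mirroring the hierarchical structure the key idea describes.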

Introduction

This book is about the moral and ethical problems that arise in relation to the development, design and use of intelligent robots, which are capable of autonomous or semi-autonomous decision-making. These problems might relate, for example, to medical robots, driverless cars, intelligent military drones, pedagogical robots, police robots, legal robots and so on.
Robot ethics is a branch of applied ethics, which examines the ethical problems that arise when intelligent robots are designed and applied in practice. The field of robot ethics is also referred to as roboethics (Veruggio, 2005: 1–4; Operto, 2011). Ethics concerns whether actions are right or wrong.
How can we ensure that intelligent robots are of service to humans? Under what conditions could intelligent robots be more ethical than humans? These are the questions that we examine in this book.
Ethics is a field with many theories and perspectives. First, there are meta-ethics, normative ethics and applied ethics. In this book, we are concerned with applied ethics. Second, there are various ethical theories, including virtue ethics, deontological theory, utilitarian theory, justice-and-fairness theory, egoism theory, value-based theory and case-based theory, to name just the most well-known. In this book, our discussions are based on a systemic perspective on ethics. We have described and analysed this perspective in Appendix 1 and Appendix 2.
Can intelligent robots learn to act in a moral manner? Are intelligent robots responsible for their actions? Under what conditions can intelligent robots perform moral actions? Can intelligent robots have a sense of morality? Can robots be viewed as moral agents? Much attention has been devoted recently to these questions (Rodogno, 2016; Coeckelbergh, 2014; Gunkel, 2014). According to these authors, the answer to the last question is no. In other words, robots cannot be viewed as moral agents. A moral agent is considered to have the capacity to reflect, evaluate, make rational choices, decide and act in a given situation (Cave, 2002: 4). Philosophers also suggest that free will is a criterion for moral agency (Nichols & Knobe, 2007: 663–685). If you have moral responsibility, you are a moral agent, and vice versa.
Moral responsibility is not necessarily linked to legal responsibility. We can assume that in the near future, intelligent robots and informats1 could be considered moral agents in accordance with the above definition. Does this mean that they are also responsible for their actions? We would probably all consider it absurd for an intelligent robot to be prosecuted and punished for its actions.
If we frame the question differently, however, we get a different perspective. Can robots perform moral actions? When considering this question, we are only interested in whether robots can make decisions based on what action would be morally right or wrong in a given situation. We are talking here about intelligent robots and informats. Although intelligent robots and informats can perform moral actions, it would be meaningless to say that robots are morally responsible (Asimov, 2008). It is equally meaningless to talk about robots having rights on a par with humans. In the not-so-distant future, we can envisage some humans having some of their organs replaced with nanorobots. But we can also imagine humans having other types of nanorobots added to their bodies in order to enhance their performance in some way or other. When humans are the starting point, then the answer is that they are both moral agents and responsible for their actions, regardless of how much technology is implanted in their bodies. Obviously, electronic linking structures are not moral per se. They do become part of a moral system, however, if they are structurally linked to a human being (Bunge, 2013).
We encounter somewhat similar problems in relation to tools used in genome engineering and social robots used in the healthcare sector (Coeckelbergh, 2010; De Grey, 2013). These tools can make major changes to our “natural” genetic composition and can change human performance and pathologies (Doudna & Sternberg, 2018).
Acting in a way that is morally wrong involves the feelings and emotions of others, at one level or another (Rodogno, 2016: 42). Accordingly, a simple definition of morally wrong behaviour could be that one is oppressing other individuals, in one way or another. We could simplify the definition of morally correct behaviour to say that it means showing respect for others, taking responsibility for others and treating others in a dignified fashion (RRD)2 (Benhabib, 2004). From this perspective, we can say that robots and informats can act in ways that are morally right and wrong. This becomes clear if we envisage intelligent drones as “killing machines”.
If a robot is designed so that it can act rationally, and all feelings and emotions are designed out of the programs that control it, then the robot is approaching the classic definition of a psychopath – a person who is highly rational but lacks normal emotions (Ronson, 2011). It is a long road from the Turing test of the 1950s, which a robot passes when a human interacting with it believes that he or she is interacting with a human, to the technological singularity (Shanahan, 2015). At that point, robots will have super-intelligence. They will design their own code through a process of learning. It seems reasonable to assume that artificial intelligence, robots and informats will have decisive significance, both for moral actions and for how they affect society. The decision to design artificial intelligence, social robots, informats, medical robots, military robots and so on is an ethical choice. When this technology attains technological singularity, it will also perform moral actions in the context of RRD (respect, responsibility, dignity). Just as with the climate crisis, it is imperative that we act now to impose limits on what is permissible in the development of artificial intelligence. Just as with the climate crisis, it will be too late once we have reached the point of no return. The technological singularity is expected to occur in around 2040.3 The period from today until 2040 is our window of opportunity. After 2040, it may be too late (Shanahan, 2015).4
This chapter is intended to provide an introduction to the central concepts discussed in this book. It is also intended to provide a general overview of the main question and the subsidiary research questions that we examine in this book. In this introductory chapter, we will touch in a general way on each of the chapters in the book, so that readers gain an idea of what they will encounter in the respective chapters later on in the book, where the questions are considered in more detail. In other words, Chapter 1 not only summarizes what this book is about, but is also a free-standing investigation of the questions listed below.
The question we examine in this chapter is as follows: how can we understand, explain and apply ideas about ethics to intelligent robots and artificial intelligence?
In order to tackle this general question, we have developed three subsidiary research questions:
1.What is robot ethics?
2.How can we reflect on artificial intelligence and ethics?
3.What ethical problems will be caused by the application of intelligent robots, artificial intelligence and genome editing in the healthcare sector?
We have summarized the introduction in Figure 1.1, which also illustrates how we have structured this chapter, as well as showing how this book is structured overall.
Figure 1.1: Innovation and ethics.

Robot ethics

The question we investigate here is as follows: what is robot ethics?
To paraphrase Aristotle, one could say that the best way to learn to act ethically is to make ethical action one's everyday practice (Tzafestas, 2016: 1). One could imagine this principle being embedded in the algorithms of intelligent robots. A basic ethical algorithm in all intelligent robots could be something like this: if an ethical situation arises, then always choose the behaviour that is considered ethically good. The meaning of ethically good behaviour in different contexts would then have to be programmed into the robot as a meta-algorithm.
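As a minimal sketch of this basic ethical algorithm, the code below encodes the rule "if an ethical situation arises, always choose the behaviour considered ethically good", with the context-dependent meaning of "ethically good" supplied by a placeholder standing in for the meta-algorithm; the scoring table, function names and candidate actions are hypothetical, not taken from the book.

```python
# A minimal sketch of the basic ethical algorithm described above. The
# scoring table and candidate actions are hypothetical placeholders; in the
# book's terms, the meaning of "ethically good" in each context would be
# supplied by a meta-algorithm programmed into the robot.

def ethical_goodness(action: str, context: str) -> float:
    """Placeholder meta-algorithm: scores how ethically good an action is."""
    scores = {
        ("brake", "pedestrian_ahead"): 1.0,
        ("swerve", "pedestrian_ahead"): 0.4,
        ("continue", "pedestrian_ahead"): 0.0,
    }
    return scores.get((action, context), 0.5)

def choose_behaviour(candidates: list, context: str, ethical_situation: bool) -> str:
    # If an ethical situation arises, always choose the behaviour that is
    # considered ethically good in this context.
    if ethical_situation:
        return max(candidates, key=lambda a: ethical_goodness(a, context))
    # Otherwise, fall back to the robot's ordinary task logic (first option here).
    return candidates[0]

print(choose_behaviour(["continue", "brake", "swerve"], "pedestrian_ahead", True))
# -> "brake"
```

The sketch makes the division of labour visible: the decision rule itself is trivial, and the genuinely hard, contested work lies entirely in the meta-algorithm that defines what counts as ethically good in each context.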
Robot ethics relates directly to the ethics of technology and the ethics of innovation (Allen et al., 2006: 12–17; Hall, 2000: 28–46). Robot ethics is linked to the development, design and application of intelligent robots in society. As a general rule, robot ethics is concerned with autonomous robots, i.e., robots that are not linked to a system (Tzafestas, 2016: 2). It is our opinion here that having intelligent robots linked up through a global artificial intelligence network would be more expedient. The reason for this is that intelligent robots would then be able to learn more easily from other robots through a process of trial and error, as well as from other robots’ behaviours and responses in different situations.
Is it ethically correct to program ethical codes into intelligent robots? Surely, ethics is only concerned with the human domain? Let us imagine a situation where an intelligent robot, such as a semi-autonomous car, has ethical codes written into its program. If the car drives into someone on a crowded street, who is then responsible for the injury suffered by the accident victim? Is it the robot, the designer, the "driver" of the car or the owner of the car? Can a robot be held responsible for its "actions"? If so, then the robot would be summoned to court, judged and possibly sentenced. Of course, this sounds completely absurd – that a car would be summoned to court to face justice! What about the person who designed the algorithm installed in the vehicular robot? Is that person responsible for the "actions" of the robot? Or is it the business for which the designer works that is responsible? We see the questions starting to mount up – questions to which we today have no clear answers. The most obvious answer is that the person who owns and "drives" the car is the one who will most probably be held responsible.
To simplify the problem, let us assume that it is the car’s owner who is “driving” the car. In such a case, it is the driver who is responsible at all times for the injuries inflicted on third parties, even if the intelligent robot took over control of the car in a critical situation and acted “ethically”. It is quite possible that the company that designed the algorithm will be the one judged guilty in a court trial, but in the first instance, the responsibility rests on the “driver”. In this case, we have considered that it will be the “driver” of the car who will initially be held responsible, and then possibly the company.
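The responsibility logic of this car example, the "driver" in the first instance and then possibly the company that designed the algorithm, can be pictured as an ordered fallback chain. The sketch below is purely illustrative; the party names and conditions are hypothetical, not a legal rule stated in the book.

```python
# Illustrative sketch only: the liability chain described above, modelled as
# an ordered list of (party, condition) pairs. The first party whose
# condition applies is held responsible in the first instance; all names
# and conditions here are hypothetical, not taken from the book.

def assign_responsibility(case: dict) -> str:
    responsibility_chain = [
        # The "driver" bears responsibility first, even if the robot took over.
        ("driver", case.get("human_operator_present", False)),
        # The company that designed the algorithm may be judged liable next.
        ("design_company", case.get("algorithm_fault_shown", False)),
    ]
    for party, applies in responsibility_chain:
        if applies:
            return party
    return "unresolved"  # one of the open questions the chapter raises

# A semi-autonomous car with a human "driver" on board:
print(assign_responsibility({"human_operator_present": True}))  # -> driver
```

The same fallback pattern recurs in the medical example that follows, where the assisting doctor stands first in the chain and the hospital, as the legal subject, stands behind the doctor.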
In other technologies, it may be more complicated to decide who should be held responsible in the case of an accident or injury. Suppose an intelligent robot diagnoses and performs an operation on a patient. Who will be held responsible if the diagnosis is incorrect and the patient is permanently injured by the operation? In the first instance, we assume that it is the doctor who assists the robot who is responsible. In such a case, similar to the driver of the car, it is the doctor who will be held responsible for the operation that resulted in a permanent injury to the patient. Ultimately, it will be the hospital that is the legal subject and will be held responsible in a court case. However, let us imagine a future where there is no do...

Table of contents

  1. Cover
  2. Half Title
  3. Series Information
  4. Title Page
  5. Copyright Page
  6. Table of Contents
  7. List of Figures
  8. Foreword
  9. 1 Intelligent robots and ethics
  10. 2 Robots and ethics
  11. 3 AI and robot ethics
  12. 4 Robotization and medical ethics
  13. Appendix 1: Chapter on concepts
  14. Appendix 2: Systemic thinking
  15. Index