The Ethics of AI

About this book

How often have you heard that we must fear an AI-driven apocalypse?

That one day the robots will take over? That we will lose our freedoms and follow the leadership of a ruthlessly efficient overlord? This is what we hear on a daily basis, but is there any truth to those claims?

The Ethics of AI: Facts, Fictions, and Forecasts seeks to explore those questions, bringing in the research of experts such as moral philosopher and author Jonathan Haidt; Patrick Fagan, former lead psychologist at Cambridge Analytica; and Charles Radclyffe, founder and CEO of the first rating agency for ESG/Ethical AI. Each chapter explores the fundamental aspects of AI and their history, challenging your perspectives on what AI is and could become. We must keep pace with the rapid development of technology, placing morality and ethics at the forefront when scoping the development of AI applications.

The Ethics of AI explores the intersection of AI, STEM, humanities and ethical formation. If you are a follower of modern technology, work with AI in any capacity or just love sci-fi references, this book belongs in your library.

The Ethics of AI by Alberto Chierici is available in PDF and ePUB format, alongside other popular books in Computing & the Computer Industry.


Chapter 1

The Origins of AI

Do technology and progress necessarily improve life?
The often-cited counterexample is the atomic bomb. Physics made big leaps between the nineteenth and twentieth centuries. We owe many of the technological developments that followed to the laws of relativity, quantum mechanics, and semiconductors, all theories that originated back then. Otto Hahn, Lise Meitner, and Fritz Strassman discovered nuclear fission in a laboratory in Berlin, Germany, in 1938 (History.com, 2020). This discovery made the first atomic bomb possible, but it also paved the way for an efficient, large-scale source of clean energy.
Going to a less dramatic example, I am a millennial born in the eighties, and I was a teenager between the nineties and early 2000s. The first time I used the internet was around 1997. I remember arguing with friends about Netscape vs. Explorer, downloading music on Napster, and messaging friends on MSN, but that was just a closed circle of geeks. Those tools were not nearly as popular as today’s messaging platforms, browsers, and social media platforms.
When I wanted to get together with my friends, we used the so-called telephone chains (or telephone trees). One person was in charge of starting the chain. She would contact two people, who would then contact two people themselves, and so on until all people were contacted. One person decided a time and location, and everyone would meet there. Agreement was reached in a couple of hours, and I would go out the next day, certain I would meet my friends and have a great night. It was beautifully simple.
Today we use WhatsApp groups. Nobody is in charge of starting or settling anything. Some people propose places and times, then others debate the day or the time that suits them best. It takes several messages and several hours to finally agree on a place and time, usually several days ahead of the event, because syncing up everyone's agenda is like arranging a G8 meeting between prime ministers. A few days before the meetup, the usual people try to sabotage the event: having mixed up too many meetups, they try to rearrange their agendas. Some people get upset and leave the group. Then a few private conversations spin off the group to gossip about the saboteurs and whether they should be cut loose. The meetup gets postponed. When the day finally arrives, a few hours before the event, you start getting a few "I'm sorry, I can't make it" messages. A few hours after the agreed time, I would get the occasional "Sorry, I'm a bit late, can you share the location?"
It takes several days, mental strain, and broken friendships just to agree on a night out. Technology and progress don’t necessarily improve life.
New means of communication like WhatsApp, in my personal experience, seem to have brought a decrease in perceived responsibility. When communication is free and open to all, it becomes hard to commit or to keep conversations meaningful. While technology has not always had this effect throughout human history, AI might work differently from earlier technological advancements.
Historian Yuval N. Harari makes an interesting argument regarding human progress, or lack thereof. For most of our 2.5 million years as a species, humans had a hunter-gatherer lifestyle. Ten thousand years ago, agriculture altered the course of sapiens’ history. Harari explains that this was not progress: “The Agricultural Revolution certainly enlarged the sum total of food at the disposal of humankind, but the extra food did not translate into a better diet or more leisure. Rather, it translated into population explosions and pampered elites. The average farmer worked harder than the average forager and got a worse diet in return” (Harari, 2014).
Agriculture enabled sapiens to grow in number, but at a disastrous cost: less leisure, more work, a poorer diet, and apparently shorter lifespans.
An agricultural civilization also meant switching from a nomadic lifestyle to settling down into defined areas for the long term. So, we started clearing forests, diverting streams, growing crops, taming animals, and building permanent structures. These activities and systems fathered the need for more complex social and organizational networks, paving the way for cities, states, and eventually empires.
Unfortunately, the unpretentious farmer had to abdicate much of his surplus yield to the rulers, who often ran nothing more than extortion rackets. Harari concludes, “This is the essence of the Agricultural Revolution: the ability to keep more people alive under worse conditions.”
While Harari makes many oversimplifications (which is expected for a history of Homo sapiens in just over four hundred pages) and does not present evidence that hunter-gatherer societies were happier than agricultural ones, he points out an interesting concept: significant historical changes and revolutions, big and small, may sometimes worsen the human condition.
AI is often described as a revolutionary technology, something that will change many things. As we will see later, there are some overstatements and some truths to that. What I want to underline at this point is that we’re still in a historical moment where we can step back and reflect on how we want to develop this technology further. And we definitely don’t want to end up worse off.
Let's start by appreciating where AI comes from, showing how it developed at the crossroads of many disciplines. The fundamental disciplines that culminated in AI include philosophy, mathematics, economics, neuroscience, psychology, computer engineering, control theory, cybernetics, and linguistics.

The Multidisciplinary Origin of AI

The foundations of artificial intelligence can be traced back many centuries, beginning with ancient philosophers.
The Greek philosopher Aristotle (384–322 BC) formulated the laws that govern the human mind's rational side. His system of syllogisms provided a way to generate conclusions mechanically, given initial premises. An example of a syllogism would be, "All cars have wheels. The vehicle I drive is a car. Therefore, the vehicle I drive has wheels."
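The book gives no code, but Aristotle's point, that conclusions can be generated mechanically from premises, is easy to sketch. Below is a minimal, hypothetical illustration in Python; the `forward_chain` function, the propositional encoding of the premises, and the fact strings are my own invention, not something from the chapter.

```python
# A hypothetical sketch (not from the book): generating conclusions
# mechanically from premises, in the spirit of Aristotle's syllogisms.
# Each rule is an (antecedent, consequent) pair over simple statements.

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new conclusions can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

# Premises: "all cars have wheels" (encoded as a rule) and
# "the vehicle I drive is a car" (a fact).
rules = [("the vehicle I drive is a car", "the vehicle I drive has wheels")]
facts = {"the vehicle I drive is a car"}

conclusions = forward_chain(facts, rules)
print("the vehicle I drive has wheels" in conclusions)  # True
```

The point is not the code itself but that no judgment is involved: once the premises are written down, the conclusion follows by blind, repeatable rule application.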
The field of philosophy influenced AI’s birth by tackling questions like the relationship between the brain and the mind, thinking, knowledge, and the relationship between knowledge and action. We’ll discuss in later chapters how philosophy is still influential today, especially when it comes to moral decision-making.
The critical influence of Aristotle's philosophy was the idea that if good reasoning follows logical and mechanical laws, it can be replicated by an engineered artifact.
Fast-forward to the seventeenth century: the French philosopher René Descartes (1596–1650) was the most important figure for understanding the original principles of modern scientific thinking. He was the first to formalize the distinction between mind and matter.
A few problems arise from this conception of the world. Stuart Russell and Peter Norvig, in their classic computer science textbook Artificial Intelligence: A Modern Approach, explain that a purely physical conception of the mind leaves little room for free will. If the human mind behaves logically and mechanically, like an Aristotelian syllogism, every decision is an automated deduction. Free will would then simply be the way the perception of available choices appears to the choosing entity.
It is worth noting that Descartes was also a proponent of "dualism": the notion that a part of the human mind (or soul or spirit) lies outside of nature, exempt from physical laws. Animals, on the other hand, did not possess this dual quality; they could be treated as machines. A few centuries later, evolutionary thinking carried this view further: humans were considered no different from animals (Beckermann, 2010). Consequently, humans too could be treated like machines.
Walking through history toward the Modern Age, we see how mathematics, economics, and many other modern sciences contributed to the field.
Mathematics gave AI formal rules for deriving conclusions and defined what can be computed. Statistics, a branch of mathematics, formalized reasoning under uncertainty, developing more precise methods for calculating what we can discern from uncertain information. Statistics was particularly influential in modern AI: in fact, most of the techniques known as machine learning have a statistical foundation, as you'll learn later on.
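To make the claim about reasoning under uncertainty concrete, here is a small, hypothetical illustration using Bayes' rule, a cornerstone of statistical reasoning. The scenario, the numbers, and the `posterior` function are my own, not from the chapter.

```python
# Hypothetical illustration (not from the book): Bayes' rule lets us
# calculate what we can discern from uncertain information.
# Scenario: a diagnostic test is 99% sensitive and 95% specific,
# and the condition affects 1% of the population.

def posterior(prior, sensitivity, specificity):
    """P(condition | positive test) via Bayes' rule."""
    true_positives = sensitivity * prior
    false_positives = (1 - specificity) * (1 - prior)
    return true_positives / (true_positives + false_positives)

p = posterior(prior=0.01, sensitivity=0.99, specificity=0.95)
print(round(p, 3))  # 0.167: even a positive test leaves the condition unlikely
```

This kind of belief updating under noisy evidence is the statistical machinery that, as the chapter notes, underlies most machine learning techniques.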
Economics investigates problems like decision-making for maximizing payoff, what to do when there are multiple stakeholders maximizing different values, and how to handle payoffs that materialize only over long timeframes. The science of economics started in 1776, when Scottish philosopher Adam Smith (1723–1790) published An Inquiry into the Nature and Causes of the Wealth of Nations. Smith was the first to treat the subject as a science, using the idea that economies can be thought of as individual agents maximizing their own economic well-being.
Smith's view of economics is still the most influential among mainstream economists. Some argue that its narrow definition of the human person stands at the root of many problems we have today with how companies operate. For tech companies using AI, the impact is dramatic, as we'll see later.
Neuroscience, the study of the nervous system and particularly the brain, examines how brains process information and inspired modern AI computational approaches like neural networks.
Psychology studies how humans and animals think and act. There have been mutual influences between psychology and computer science involving the same academics who are considered the fathers of AI and who started the field of cognitive science at MIT in the 1950s. Today, a common (although far from universal) view among psychologists is that “a cognitive theory should be like a computer program” (Anderson, 1980). The recent development of behavioral science has influenced how modern AI products are being developed by big tech firms and modern startups.
Fields related to engineering and language complete the spectrum of influences on the AI field. For artificial intelligence to succeed, we need two things: intelligence and an engineering artifact. The computer has been the best candidate for the artifact, and building increasingly efficient computers is a crucial part of developing AI. As a branch of computer science, AI itself has influenced how to construct efficient machines. Control theory and cybernetics study how engineering artifacts can operate under their own control.

Table of contents

  1. Introduction
  2. CHAPTER 1: The Origins of AI
  3. CHAPTER 2: What Is Artificial Intelligence
  4. CHAPTER 3: Machine and Human Learning
  5. CHAPTER 4: Limitations of AI
  6. CHAPTER 5: AI, from Fiction to Behavioral Science
  7. CHAPTER 6: Case Studies
  8. CHAPTER 7: What Should AI Ethics Focus On?
  9. CHAPTER 8: What Is Human: Part I
  10. CHAPTER 9: What Is Human: Part II
  11. CHAPTER 10: Ethical AI or Ethical Humans?
  12. Acknowledgments
  13. Appendix