Chapter 1
Do technology and progress necessarily improve life?
The often-cited counterexample is the atomic bomb. Physics made big leaps between the nineteenth and twentieth centuries. We owe many of the technological developments that followed to the laws of relativity, quantum mechanics, and semiconductors, all theories that originated back then. Otto Hahn, Lise Meitner, and Fritz Strassmann discovered nuclear fission in a laboratory in Berlin, Germany, in 1938 (History.com, 2020). This made the first atomic bomb possible, and it also led to the discovery of an efficient, large-scale source of clean energy.
Going to a less dramatic example, I am a millennial born in the eighties, and I was a teenager between the nineties and early 2000s. The first time I used the internet was around 1997. I remember arguing with friends about Netscape vs. Explorer, downloading music on Napster, and messaging friends on MSN, but that was just a closed circle of geeks. Those tools were not nearly as popular as today’s browsers, messaging apps, and social media platforms.
When I wanted to get together with my friends, we used the so-called telephone chains (or telephone trees). One person was in charge of starting the chain. She would contact two people, who would then contact two people themselves, and so on until all people were contacted. One person decided a time and location, and everyone would meet there. Agreement was reached in a couple of hours, and I would go out the next day, certain I would meet my friends and have a great night. It was beautifully simple.
Today we use WhatsApp groups. Nobody’s in charge of starting or setting anything. Some people will start proposing places and times, then others will start debating the day or the time that suits them best. It takes several messages and several hours to finally agree on a place and time, usually several days ahead of the event, because syncing up everyone’s agenda is like arranging a G8 meeting between prime ministers. A few days before the meetup, the usual people try to sabotage the event: they have committed to too many meetups, so they try to rearrange their agendas. Some people get upset and leave the group. Then a few private conversations spin off the group to gossip about the saboteurs and whether they should be cut loose. The meetup gets postponed. When the day finally arrives, a few hours before the event, the first “I’m sorry, I can’t make it” messages come in. A few hours after the agreed time, I get the occasional “Sorry, I’m a bit late, can you share the location?”
It takes several days, mental strain, and broken friendships just to agree on a night out. Technology and progress don’t necessarily improve life.
New means of communication like WhatsApp, in my personal experience, seem to have brought a decrease in “perceived” responsibilities. Communication that is too abundant and free-for-all makes it hard to commit or to keep conversations meaningful. This has not always been the case throughout the history of human development, and AI might work differently from other technological advancements.
Historian Yuval N. Harari makes an interesting argument regarding human progress, or lack thereof. For most of our 2.5 million years as a species, humans had a hunter-gatherer lifestyle. Ten thousand years ago, agriculture altered the course of sapiens’ history. Harari explains that this was not progress: “The Agricultural Revolution certainly enlarged the sum total of food at the disposal of humankind, but the extra food did not translate into a better diet or more leisure. Rather, it translated into population explosions and pampered elites. The average farmer worked harder than the average forager and got a worse diet in return” (Harari, 2014).
Agriculture enabled sapiens to grow in number, but at a disastrous cost: less leisure, more work, a worse diet, and apparently shorter lifespans.
An agricultural civilization also meant switching from a nomadic lifestyle to settling down into defined areas for the long term. So, we started clearing forests, diverting streams, growing crops, taming animals, and building permanent structures. These activities and systems created the need for more complex social and organizational networks, paving the way for cities, states, and eventually empires.
Unfortunately, the unpretentious farmer had to abdicate much of his surplus yield to the rulers, who often ran nothing more than extortion rackets. Harari concludes, “This is the essence of the Agricultural Revolution: the ability to keep more people alive under worse conditions.”
While Harari makes many oversimplifications (which is expected for a history of Homo sapiens in just over four hundred pages) and does not present any evidence that hunter-gatherer societies were happier than rural ones, he points out an interesting concept: significant historical changes and revolutions, big and small, may sometimes worsen the human condition.
AI is often described as a revolutionary technology, something that will change many things. As we will see later, there are some overstatements and some truths to that. What I want to underline at this point is that we’re still in a historical moment where we can step back and reflect on how we want to develop this technology further. And we definitely don’t want to end up worse off.
Let’s start by appreciating where AI comes from: the field developed at the crossroads of many disciplines. The fundamental disciplines and processes that culminated in AI include philosophy, mathematics, economics, neuroscience, psychology, computer engineering, control theory, cybernetics, and linguistics.
The Multidisciplinary Origin of AI
The foundations of artificial intelligence can be traced back many centuries, beginning with ancient philosophers.
The Greek philosopher Aristotle (384–322 BC) formulated the laws that govern the rational side of the human mind. His system of syllogisms provided a way to generate conclusions mechanically, given initial premises. An example of a syllogism would be, “All cars have wheels. I drive a car. Therefore, the car I drive has wheels.”
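Aristotle’s insight, that conclusions can be generated mechanically from premises, is easy to sketch in modern code. The following is my own minimal illustration (the rule and fact names are invented for the example, not drawn from any historical source): rules encode “All A are B,” facts encode “x is an A,” and the program applies the rules repeatedly until no new conclusions appear.

```python
def derive(rules, facts):
    """Return every fact derivable from the premises by mechanical deduction.

    rules: set of (a, b) pairs meaning "all a are b"
    facts: set of (x, category) pairs meaning "x is a(n) category"
    """
    derived = set(facts)
    changed = True
    while changed:  # keep applying rules until nothing new is concluded
        changed = False
        for a, b in rules:
            for x, category in list(derived):
                if category == a and (x, b) not in derived:
                    # "x is an a; all a are b; therefore x is a b"
                    derived.add((x, b))
                    changed = True
    return derived

rules = {("car", "thing with wheels")}  # "All cars have wheels"
facts = {("my car", "car")}             # "The car I drive is a car"
conclusions = derive(rules, facts)
print(("my car", "thing with wheels") in conclusions)  # True
```

The point is not the code itself but that no step requires judgment: each conclusion follows from blind pattern matching, which is exactly what makes the process replicable by a machine.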
The field of philosophy influenced AI’s birth by tackling questions like the relationship between the brain and the mind, thinking, knowledge, and the relationship between knowledge and action. We’ll discuss in later chapters how philosophy is still influential today, especially when it comes to moral decision-making.
The critical influence of Aristotle’s philosophy was this: if good reasoning follows logical and mechanical laws, it can be replicated by an engineered artifact.
Fast-forward to the seventeenth century, when the French philosopher René Descartes (1596–1650) became the most important figure for understanding the original principles of modern scientific thinking. He was the first to formalize the distinction between mind and matter.
A few problems arise from this conception of the world. Stuart Russell and Peter Norvig, in their classic computer science textbook Artificial Intelligence: A Modern Approach, explain that a purely physical conception of the mind leaves little room for free will. If the human mind behaves logically and mechanically like an Aristotelian syllogism, every decision is an automated deduction. Free will would be just a perception of the way the available choice appears to the choosing entity.
It is worth noting that Descartes was also a proponent of “dualism”: the notion that there is a part of the human mind (or soul or spirit) that is outside of nature, exempt from physical laws. Animals, on the other hand, did not possess this dual quality; they could be treated as machines. Evolutionary thinking, developing a few centuries later, overturned half of this view: humans were considered no different from animals (Beckermann, 2010). Consequently, humans too could be treated like machines.
Walking through history toward the Modern Age, we see how mathematics, economics, and many other modern sciences contributed to the field.
Mathematics gave AI formal rules to derive conclusions and defined what can be computed. Statistics, a branch of mathematics, formalized reasoning under uncertainty and developed more precise methods for calculating what we can discern from uncertain information. Statistics has been particularly influential in modern AI. In fact, most of the techniques known as machine learning have a statistical foundation, as you’ll learn later on.
Economics investigates problems like decision-making for maximizing payoff, what to do when there are multiple stakeholders maximizing different values, and what happens when these objectives materialize over long timeframes. The science of economics started in 1776, when Scottish philosopher Adam Smith (1723–1790) published An Inquiry into the Nature and Causes of the Wealth of Nations. Smith was the first to treat the subject as a science, using the idea that economies can be thought of as individual agents maximizing their own economic well-being.
Smith’s view of economics is still the most influential among mainstream economists. Some argue that its limited definition of the human person stands at the root of many problems we have today with how companies operate. For tech companies using AI, this has a dramatic impact, as we’ll see later.
Neuroscience, the study of the nervous system, particularly the brain, studies how brains process information and inspired modern AI computational approaches like neural networks.
Psychology studies how humans and animals think and act. There have been mutual influences between psychology and computer science involving the same academics who are considered the fathers of AI and who started the field of cognitive science at MIT in the 1950s. Today, a common (although far from universal) view among psychologists is that “a cognitive theory should be like a computer program” (Anderson, 1980). The recent development of behavioral science has influenced how modern AI products are being developed by big tech firms and modern startups.
Fields related to engineering and language complete the spectrum of influences on the AI field. For artificial intelligence to succeed, we need two things: intelligence and an engineering artifact. The computer has been the best candidate for the artifact. Building increasingly efficient computers is a crucial part of developing AI, and as a branch of computer science, AI itself has influenced how to construct efficient machines. Control theory and cybernetics study how engineering artifacts c...