AI vs Humans
eBook - ePub

  1. 352 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android
About this book

The great majority of books on artificial intelligence are written by AI experts who understandably focus on its achievements and potential transformative effects on society. In contrast, AI vs Humans is written by two psychologists (Michael and Christine Eysenck) whose perspective on AI (including robotics) is based on their knowledge and understanding of human cognition.
This book evaluates the strengths and limitations of both people and AI. The authors' expertise equips them well to assess how well (or badly) AI compares with human intelligence. They accept that AI matches or exceeds human ability in many spheres, such as mathematical calculations, complex games (e.g., chess, Go, and poker), diagnosis from medical images, and robotic surgery.
However, the human tendency to anthropomorphise has led many people to claim mistakenly that AI systems can think, infer, reason, and understand while engaging in information processing. In fact, such systems lack all those cognitive skills and are also deficient in the quintessentially human abilities of flexibility of thinking and general intelligence.
At a time when human commitment to AI appears unstoppable, this up-to-date book advocates a symbiotic and co-operative relationship between humans and AI. It will be essential reading for anyone interested in AI and human cognition.

Chapter 1 Brief history of AI and robotics

DOI: 10.4324/9781003162698-1
Artificial intelligence is important: worldwide spending on AI is over $40 billion. Unsurprisingly, there are more books on artificial intelligence than you can shake a stick at. Our book is different because we are psychologists and so well placed to compare AI's achievements against our knowledge of human cognition and intelligence.

Human dominance

How important are humans in the grand scheme of things? At one time, it seemed obvious we were very special. We dominated every other species, the Earth was the centre of the universe, and we were far superior to all other species because we possessed souls and minds.
Billions of religious people (for totally understandable reasons) continue to believe in the specialness of the human species. However, several scientific discoveries have cast doubt on it. First, we can no longer pretend the Earth is of central importance in the universe. The observable universe is approximately 93 billion light-years in diameter (and will be even larger before we finish writing this sentence).
It is a sobering thought that there are at least 100 billion galaxies in the universe (possibly twice as many – but what's 100 billion between friends?), and the Earth forms a minute fraction of one galaxy. If you hold a grain of sand up in the air, the tiny area of the sky it covers contains approximately 10,000 galaxies. Even within our own galaxy (the Milky Way), the Earth is far from unique: the Milky Way is estimated to contain approximately 17 billion Earth-sized planets!
As the American theoretical physicist Richard Feynman pointed out, “It doesn’t seem to me that this fantastically marvellous universe, this tremendous range of time and space and different kinds of animals, and all the different planets, and all these atoms with all their motions, and so on, all this complicated thing can merely be a stage so that God can watch human beings struggle for good and evil – which is the view that religion has. The stage is too big for the drama” (cited in Gleick, 1992).
Second, the biologist Charles Darwin argued persuasively that the human species is less special and unique than had been believed before he published his theory of evolution in On the Origin of Species (1859). Subsequently, research has identified surprisingly great similarities between humans and other species, and even plants. For example, you may well have heard that we share 50% of our DNA with bananas. That is actually totally wrong. In fact, we share only 1% of our DNA with bananas (that's a relief!). However, the bad news is that we share 50% of our genes with bananas. Even worse, we share 70% of our genes with sea sponges. That puts us in our place, but sea sponges may regard it as promising news.
What is the difference between DNA and genes? Our genome consists of all the DNA in our cells: we have approximately 3 billion base pairs of DNA. Genes are those sections of the genome fulfilling some function (e.g., determining eye colour). Humans have approximately 23,000 genes, but these genes form less than 2% of the 3 billion base pairs of DNA we have. Bizarrely, most of our DNA has no obvious use and is often described as “junk DNA.” In fairness, it should be pointed out that geneticists are increasingly discovering that some so-called “junk DNA” is more useful than implied by that derogatory term.
Humans also have numerous pseudogenes – sections of DNA that resemble functional genes but are themselves non-functional. Here is an example. Humans deprived of vitamin C (e.g., sailors experiencing a very limited diet while at sea) often develop a nasty disease called scurvy. This causes them to bleed profusely and their bones to become brittle, often followed by a painful death.
In contrast, the great majority of animal species do not suffer from scurvy or scurvy-like conditions. These species have genes ensuring they produce plenty of vitamin C in their livers, meaning they are not dependent on eating food containing that vitamin. Frustratingly, humans have all the genes required to produce vitamin C, but one of them (the GULO gene) is broken and so of no use. What has happened during the course of evolution is analogous to removing the spark plug from a car (Lents, 2018): nearly everything that should be there is present, but the missing bit is crucial.
How should we respond to the various challenges to human specialness discussed above? We could focus on our superior powers of thinking and reasoning. Indeed, those powers (rather than our superior size or strength) have made us the dominant species on Earth. In recent years, however, the comforting belief that humans are the most intelligent entities on Earth has been increasingly questioned. The two chess matches between Garry Kasparov (the Russian grandmaster, at the time the highest-rated chess player in history) and Deep Blue, an IBM computer, formed a major turning point (see Figure 1.1).
Figure 1.1 One of the two racks of IBM's Deep Blue, which beat Garry Kasparov, the world champion, in 1997.
In the first match (held in 1996), Kasparov triumphed by three games to one. As a result, he was confident ahead of the second match a year later. He had a discussion with Chung-Jen Tan, the scientist managing IBM's team. When Tan said IBM was strongly focused on winning the match, Kasparov replied, “I don’t think it's an appropriate thing to discuss the situation if I lose. I never lost in my life.”
The second match was epoch-making. With one game to go, Kasparov and Deep Blue were level. Thus, the final game on 11 May 1997 was absolutely crucial. Kasparov was beaten by the computer in 19 moves – the first time in his entire chess-playing career he had ever lost a game in under 20 moves. Never before had a computer beaten the human world champion at a complex intellectual pursuit. As the Guardian newspaper wrote, Kasparov had been “humbled by a 1.4-ton heap of silicon … It is a depressing day for humankind.”
What does the future hold? Ray Kurzweil (2005), an American expert in AI, predicted that by 2045 computers will be a billion times more powerful than all the 8 billion human brains put together! Blimey, is Ray for real? Can he possibly be right? Admittedly, some of his predictions have been spot on. He accurately predicted in 1990 that a computer would defeat the World Chess Champion by 1998. He also accurately predicted a gigantic increase in use of the Internet (Google is now used by a billion people every day) well before it became popular.
Kurzweil has been strongly endorsed by Bill Gates, who describes him as “the best person I know at predicting the future of artificial intelligence.” However, other experts are less positive. According to the American cognitive scientist Doug Hofstadter, Kurzweil has proposed “a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It's as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what's good or bad.”

Artificial intelligence

This book concerns the relationship between human powers and abilities and those of machines powered by AI. What exactly is “AI”? According to Andrew Colman's (2015) Oxford Dictionary of Psychology, it is “the design of hypothetical or actual computer programs or machines to do things normally done by minds, such as playing chess, thinking logically, writing poetry, composing music, or analysing chemical substances.”
There are two radically different ways machines powered by AI might produce outputs resembling those of humans. First, machines could be programmed to model or mimic human cognitive functioning. For example, AI programs can solve many problems using strategies closely resembling those used by humans. A major goal of this approach is to increase our understanding of the human mind.
Historically, the first program showing the value of this approach was the General Problem Solver devised by Allen Newell, John Shaw, and Herb Simon (1958). Their computer program was designed to solve several problems, one of which was the Tower of Hanoi. In this problem, there are three vertical pegs in a row. Initially, several discs are stacked on the first peg, with the largest disc at the bottom and the smallest one at the top. The task is to finish up with all the discs on the last peg, again arranged with the largest at the bottom and the smallest at the top. Only one disc can be moved at a time, and a larger disc must never be placed on top of a smaller one.
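For readers who like to see such rules written out explicitly, here is a minimal sketch of the standard recursive solution in Python. It is our own illustration, not the Newell et al. program; the function and peg names are purely illustrative.

```python
def solve_hanoi(n, source="A", spare="B", target="C", moves=None):
    """Return the list of moves that transfers n discs from source to target."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))                    # move the smallest disc directly
    else:
        solve_hanoi(n - 1, source, target, spare, moves)  # park the n-1 smaller discs on the spare peg
        moves.append((source, target))                    # move the largest remaining disc
        solve_hanoi(n - 1, spare, source, target, moves)  # stack the smaller discs back on top of it
    return moves

# Three discs require the minimum of 2**3 - 1 = 7 moves.
print(solve_hanoi(3))
```

Notice that this solution works out the entire sequence of moves in advance – precisely the kind of extensive forward planning that, as discussed next, humans typically avoid.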
Humans have limited short-term memory capacity, and so they typically engage in relatively little forward planning on problems such as the Tower of Hanoi. Newell et al. (1958) managed to produce a program using processing strategies resembling those of humans.
Second, machines could simply be programmed to perform complex tasks (and easy ones, too) while totally ignoring the cognitive processes humans would use. The chess computer Deep Blue that beat Garry Kasparov exemplifies this approach. It had immense computing power, evaluating up to 200 million chess positions per second. Thus, Deep Blue's huge advantage was raw processing speed rather than the cognitive complexity of its operations.
AI systems could also in principle be programmed to mimic major aspects of the human brain's physical functioning. The ultimate goal here is to devise AI systems possessing “biological plausibility” (van Gerven & Bohte, 2017). Some progress has been made in this direction. For example, deep neural networks (discussed in detail shortly) are used extensively in AI. They are called neural networks because there is some similarity between their structure and the relationships among biological neurons in the human brain. However, the differences are much greater than the similarities. Biological neurons are far more complex than the neurons in deep neural networks, and our brains contain a staggeringly large number of neurons (approaching 100 billion). More generally, those who devise deep neural networks “usually do not attempt to explicitly model the variety of different kinds of brain neurons, nor the effects of neurotransmitters and hormones. Furthermore, it is far from clear that the brain contains the kind of reverse connections that would be needed if the brain were to learn by a process like backpropagation [using information about errors to enhance performance]” (Garson, 2019).
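To make the comparison concrete, here is a deliberately tiny sketch in Python (our own illustration, not a model from the sources cited above) of a neural network trained by backpropagation. Each artificial “neuron” is just a weighted sum passed through a squashing function, which is part of what makes the biological analogy so loose.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny network: 2 inputs -> 4 hidden "neurons" -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

# A toy task (XOR): the output should be 1 only when exactly one input is 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: each layer computes weighted sums and squashes them.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass (backpropagation): the prediction error is pushed back
    # through the network to obtain a gradient for every weight.
    delta_out = (output - y) * output * (1 - output)
    delta_hidden = (delta_out @ W2.T) * hidden * (1 - hidden)

    # Nudge every weight a little in the direction that reduces the error.
    W2 -= 1.0 * hidden.T @ delta_out
    b2 -= 1.0 * delta_out.sum(axis=0)
    W1 -= 1.0 * X.T @ delta_hidden
    b1 -= 1.0 * delta_hidden.sum(axis=0)

print(output.round(2))  # predictions should move towards [0, 1, 1, 0]
```

Even this toy example (which, depending on the random starting weights, may occasionally settle on a poor solution) highlights the gulf mentioned above: the whole network contains a handful of “neurons,” each reducible to a few multiplications and additions, whereas the brain's roughly 100 billion neurons are living cells influenced by neurotransmitters and hormones.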

History of artificial intelligence

The term “artificial intelligence” was coined by McCarthy et al. (1955). They defined it as making a machine behave “in ways that would be called intelligent if a human were so behaving.” However, Herb Simon (who won the Nobel Prize for Economics) argued that the term “complex information processing” was preferable.
Much of this book is relevant to the issue of whether genuine intelligence is involved in the area generally known as “artificial intelligence.” What is “intelligence?” It is the ability to behave adaptively and to solve novel problems. Of crucial importance, intelligence is a general ability that is displayed with respect to numerous very dissimilar new problems rather than being limited to problems of a single type (e.g., problems in mathematics) (see Chapter 3).
The true origins of AI lie much earlier than 1955. Ada Lovelace (1815–1852), Byron's daughter, was the world's first computer programmer. She produced the world's first machine algorithm (a set of rules used to solve a given problem) for a computing machine that existed on paper but was never actually built during her lifetime.
Figure 1.2 Photograph of Alan Turing, the brilliant English mathematician and computer scientist, at the age of 16.
Approximately 100 years later, in 1937, Alan Turing (1912–1954; see Figure 1.2) published an incredibly far-sighted article. He speculated that it should be possible to build machines that, using only 0s and 1s, could solve any problem humans could. Most famously, he subsequently developed a code-breaking machine (the Bombe) that weighed a ton. It was the world's first electro-mechanical computer, and it deciphered the Enigma code used by the German military during the Second World War to encode important messages. The information obtained from the Bombe considerably reduced the number of Allied ships sunk by German submarines (U-boats).
Computer programs make extensive use of algorithms. What is an algorithm? In essence, it is a set of instructions providing a step-by-step procedure for solving numerous logical and mathematical problems. Here is a simple example of an algorithm designed to add two two-digit numbers (e.g., 46 + 79). The first step is to add the tens (40 + 70 = 110); the second step is to add the ones (6 + 9 = 15); the third and final step is to add the outcomes of the first two steps (110 + 15 = 125). Thus, the answer is 125.
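A direct translation of those three steps into code might look like the following Python sketch (the function name is ours, chosen purely for illustration).

```python
def add_two_digit_numbers(a, b):
    """Add two two-digit numbers using the three-step algorithm in the text."""
    tens = (a // 10) * 10 + (b // 10) * 10   # step 1: add the tens (40 + 70 = 110)
    ones = (a % 10) + (b % 10)               # step 2: add the ones (6 + 9 = 15)
    return tens + ones                       # step 3: add the two partial sums (110 + 15 = 125)

print(add_two_digit_numbers(46, 79))  # prints 125
```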
AI developed considerably between 1945 and 1975. However, there were relatively few major breakthroughs, much over-hyping of AI's future prospects, and the cost of research ...

Table of contents

  1. Cover Page
  2. Half-Title Page
  3. Title Page
  4. Copyright Page
  5. Dedication Page
  6. Contents
  7. Preface
  8. 1 Brief history of AI and robotics
  9. 2 AI dominance
  10. 3 Human strengths
  11. 4 How (un)intelligent is AI?
  12. 5 Human limitations
  13. 6 Robots and morality
  14. 7 And the winner is?
  15. 8 The future
  16. References
  17. Index