Evil Robots, Killer Computers, and Other Myths

The Truth About AI and the Future of Humanity

eBook - ePub

  1. 288 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

About this book

Are AI robots and computers really going to take over the world?

Longtime artificial intelligence (AI) researcher and investor Steve Shwartz has grown frustrated with the fear-inducing hype around AI in popular culture and media. Yes, today's AI systems are miracles of modern engineering, but no, humans do not have to fear robots seizing control or taking over all our jobs.

In this exploration of the fascinating and ever-changing landscape of artificial intelligence, Dr. Shwartz explains how AI works in simple terms. After reading this captivating book, you will understand

• the inner workings of today's amazing AI technologies, including facial recognition, self-driving cars, machine translation, chatbots, deepfakes, and many others;

• why today's artificial intelligence technology cannot evolve into the AI of science fiction lore;

• the crucial areas where we will need to adopt new laws and policies in order to counter threats to our safety and personal freedoms resulting from the use of AI.

So although we don't have to worry about evil robots rising to power and turning us into pets—and we probably never will—artificial intelligence is here to stay, and we must learn to separate fact from fiction and embrace how this amazing technology enhances our world.

1
THE SOCIAL IMPACT OF AI
In 2011, I watched on TV as the IBM Watson DeepQA computer played a challenge match against two previous Jeopardy! champions. Nerd that I am, I rooted for the machine. I was thrilled to see the computer answer correctly over and over again.
Even though this was a fantastic achievement, I strongly suspected that there was no real intelligence in the underlying IBM technology. My suspicion was confirmed when IBM published a series of detailed journal articles1 explaining that the technology was mostly a massive set of very clever tricks, with no human-level intelligence behind it.
IBM then decided to ride the credibility produced by the Jeopardy! victory and began to rebrand itself around its artificial intelligence (AI) capabilities. IBM marketing claimed that “Watson can understand all forms of data, interact naturally with people, and learn and reason, at scale.”2
The ads made it sound as though technology had progressed to the point of being able to think and reason like people. While I appreciated the engineering achievements Watson demonstrated on Jeopardy!, even Watson’s creators at IBM knew these systems could not think or reason in any real sense.
Since then, AI has blasted its way into the public consciousness and our everyday lives. It is powering advances in medicine, weather prediction, factory automation, and self-driving cars. Even golf club manufacturers report that AI is now designing their clubs. Every day, people interact with AI. Google Translate helps us understand foreign language webpages and talk to Uber drivers in foreign countries. Vendors have built speech recognition into many apps. We use personal assistants like Siri and Alexa daily to help us complete simple tasks. Face recognition apps automatically label our photos. And AI systems are beating expert game players at complex games like Go and Texas Hold ’Em. Factory robots are moving beyond repetitive motions and starting to stock shelves.
Each of these fantastic AI systems enhances the perception that computers can think and reason like people. Technology vendors reinforce this perception with marketing statements that give the impression their systems have human-level cognitive capabilities. For example, Microsoft and Alibaba announced AI systems that could supposedly read as well as people can. However, these systems had minimal skills and did not even understand what they were reading.
AI systems perform many tasks that seem to require intelligence. The rapid progress in AI has caused many to wonder where it will lead. Science fiction writers have pondered this question for decades. Some have imagined a future in which benevolent and beneficial robots are at our service. Everyone would like to have an automated housekeeper like Rosie the Robot from the popular 1960s cartoon TV series The Jetsons. We all love C-3PO from the Star Wars universe, who can have conversations in “over six million forms of communication,” and his self-aware trash-can partner, R2-D2, who can reprogram enemy computer systems. And we were in awe of the capabilities of the sentient android Data in Star Trek: The Next Generation, who was third in command of the starship (although he famously lacked emotion and so had trouble understanding human behavior).
Others have portrayed AI characters as neither good nor evil but with human-like frailties and have explored the consequences of human–robot interactions. In Blade Runner, for example, Rachael the replicant did not know she was not human until she failed a test. Spike Jonze’s Her explores the consequences of a human falling in love with a disembodied humanoid virtual assistant. In Elysium, Matt Damon’s character must report to an android parole officer. In the TV series Humans and Westworld, humanoid robots gain consciousness and have emotions that cause them to rebel against their involuntary servitude.
Many futurists have foreseen evil robots and killer computers—AI systems that develop free will and turn against us. In the 1927 film Metropolis, a human named Maria is kidnapped and replaced by a robot who looks, talks, and acts like her and then proceeds to unleash chaos in the city. In the 1968 book-turned-movie 2001: A Space Odyssey, the spaceship has a sentient computer, HAL, that runs the spacecraft and has a human-like personality. It converses with the astronauts about a wide variety of topics. Concerned that HAL may have made an error, the astronauts agree to turn the computer off. However, HAL reads their lips, and, in an act of self-preservation, turns off the life-support systems of the other crew members. In the Terminator movie franchise, which first appeared in movie theaters in 1984, an AI defense system perceives all humans as a security threat and creates fearsome robots with one mission: eradicate humanity.
Speculation about the potential dangers of AI is not limited to the realm of science fiction. Many highly visible technologists have predicted that AI systems will become smarter and smarter and will eventually take over the world. Tesla founder Elon Musk says that AI is humanity’s “biggest existential threat”3 and that it poses a “fundamental risk to the existence of civilization.”4 The late renowned physicist Stephen Hawking said, “It could spell the end of the human race.” Philosopher Nick Bostrom, who is the founding director of the Future of Humanity Institute, argues that AI poses the greatest threat humanity has ever encountered—greater than nuclear weapons.5
This kind of fear-inducing hype overstates the capabilities of AI. AI systems are never going to become intelligent enough to exterminate us or turn us into pets. That said, AI does create many real and critical social issues, and they will not be solved until we set aside this existential fear.
FACT AND FICTION
The AI systems that these technologists and science fiction authors worry about are all examples of artificial general intelligence (AGI). AGI systems share with humans the ability to reason; to process visual, auditory, and other input; and to use that input to adapt to their environments in a wide variety of settings. These systems are as knowledgeable and communicative as humans about a wide range of human events and topics.6 They’re also complete fiction.
Today’s AI systems are miracles of modern engineering. Each performs a single task that previously required human intelligence. If we compare these systems with the AGI systems of science fiction lore and with human beings, two striking differences stand out. First, each of today’s AI systems can perform only one narrowly defined task.7 A system that learns to name the people in photographs cannot do anything else. It cannot distinguish between a dog and an elephant. It cannot answer questions, retrieve information, or have conversations. Second, today’s AI systems have little or no commonsense8 knowledge of the world and therefore cannot reason based on that knowledge. For example, a facial recognition system can identify people’s names but knows nothing about those particular people or about people in general. It does not know that people use eyes to see and ears to hear. It does not know that people eat food, sleep at night, and work at jobs. It cannot commit crimes or fall in love. Today’s AI systems are all narrow AI systems, a term coined in 2005 by futurist Ray Kurzweil to describe machines that can perform only one specific task. Although the performance of narrow AI systems can make them seem intelligent, they are not.
In contrast, humans and fictional AGI systems can perform large numbers of dissimilar tasks. We not only recognize faces, but we also read the paper, cook dinner, tie our shoes, discuss current events, and perform many, many other tasks. We also reason based on our commonsense knowledge of the world. We apply common sense, learned experience, and contextual knowledge to a wide variety of tasks. For example, we use our knowledge of gravity when we take a glass out of the cupboard. We know that if we do not grasp it tightly enough, it will fall. This is not conscious knowledge derived from a definition of gravity or a description in a mathematical equation; it’s unconscious knowledge derived from our lived experience of how the world works. And we use that kind of knowledge to perform dozens of other tasks every day.
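To make “narrow” concrete, here is a minimal sketch, in Python, of the face-matching idea described above. It is not the author’s example: the embed() stand-in, the random 32×32 “photos,” and the names alice and bob are all invented for illustration, and a real system would use a trained deep network over real images.

```python
# A hypothetical sketch of a narrow AI system: a face matcher whose
# entire competence is mapping a photo to the nearest stored name.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Stand-in for a trained face-embedding network: flatten the
    # image and L2-normalize it so a dot product acts as cosine
    # similarity. (Invented for illustration.)
    v = image.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

# The system's entire "knowledge": a table of vectors keyed by name.
known_faces = {
    "alice": embed(np.random.rand(32, 32)),
    "bob": embed(np.random.rand(32, 32)),
}

def identify(photo: np.ndarray) -> str:
    # Return the name whose stored vector is most similar to the
    # query. Nothing here knows what a person is, that people have
    # jobs, or that eyes are for seeing; it is pure pattern matching.
    query = embed(photo)
    return max(known_faces, key=lambda name: float(known_faces[name] @ query))

print(identify(np.random.rand(32, 32)))  # prints "alice" or "bob"
```

Nothing in that lookup table encodes what a person is; swap the stored vectors for elephant photos and the same code would “recognize” elephants just as happily. That is exactly the narrowness described above.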
The big question is whether today’s narrow AI systems will ever evolve into AGI systems that can use commonsense reasoning to perform many different tasks. As I will explain, the answer is no. We do not have to worry about AGI systems taking over the world. And we probably never will.
TOASTERS DON’T HAVE GHOSTS
The title of Arthur Koestler’s 1967 book The Ghost in the Machine9 alludes to the long-standing philosophical debate about whether humans have a “ghost”—a mind, a consciousness, that cannot be seen or measured—in addition to their physical machines. Koestler believed that people are just their physical machines, that there is no separate mind, and that we will someday be able to explain, for example, emotions like love as the interaction of neurons. I cannot tell you the answer to the philosophical question, and I have no idea if we will ever be able to explain love. However, I can confidently declare my belief that we will never develop computer systems or robots with human-level, commonsense reasoning capabilities. Said another way, there will never be a ghost in the machine.
Even though we do not need to worry about AGI systems dominating humanity, as narrow AI technology becomes more and more widely deployed, it brings with it many new social issues. The race to perfect self-driving vehicles is well underway, but there are safety issues that we must address before we deploy them on our city streets and highways. Autonomous weapons and other narrow AI advances threaten public safety. We may see a significant impact of narrow AI technology on employment. Facial recognition technology is being used for surveillance and threatens our privacy. There are significant issues around fairness and discrimination against minorities. Furthermore, deepfakes, fake news, and hackers are influencing real-world elections. We will need to address all these social issues.
One of the keys to finding solutions to AI-related issues is to make sure we do not overcomplicate them by conflating narrow AI and AGI. For example, if AGI capabilities were imminent, we would need laws that govern human interaction with intelligent robots. Do robots have rights? Can they go to jail? Can they be held financially responsible for an accident? We would also need laws to ensure that the manufacturing process does not create robots that can take over the world.
Fortunately, narrow AI systems will only ever be able to make autonomous decisions regarding specific tasks, so we do not need general AGI laws. We do not have to worry about the legal rights of robots. They can and should have no more legal standing than toasters. Instead, we can focus on laws for specific uses of narrow AI, such as autonomous vehicles.
THE FUTURE IS ALWAYS A MIXED BAG
We see warnings about AI in the popular press every single day. In December 2019 alone, The New York Times featured headlines with grave cautions: “Artificial Intelligence Is Too Important to Leave to Google and Facebook Alone,” “Many Facial-Recognition Systems Are Biased, Says U.S. Study,” and “A.I. Is Making It Easier to Kill (You). Here’s How.” A recent study showed that 60 percent of the people in the UK fear AI.10
Historically, new technology has brought great benefits to society. However, the positive impacts are often accompanied by some negatives. The invention of the automobile brought us greater mobility, but also introduced car accidents. The invention of the internet brought us connectivity beyond any level imagined previously, while it also led to hackers and spam and facilitated child exploitation.
Although even narrow AI may lead to many societal changes, such as the way we work, it’s no different from any other major technological advance. The steam engine and mass production led Western society away from an agrarian lifestyle and into factories, which brought with it increased pollution and wage disparity but ultimately led to the middle class. Advances in transportation expanded the world from local communities into huge geographic regions of travel and trade. The internet expanded that world even further, changing how we do just about everything.
AI is just one more step forward. As with each of those other advances, AI can be dangerous when used for nefarious purposes or without proper regulation, but it’s a tool. Just like any kind of progress, although AI may involve some difficult societal and personal challenges in the short term, its overall effect on the world and on our lives will be largely positive.
2
FEARS WORTH HAVING
Many things can go wrong with narrow AI systems, and some of them affect our safety. If we are not careful, we could end up with self-driving cars running over babies in strollers; out-of-control, bomb-carrying drones; missed cancer diagnoses; and nuclear plant meltdowns. However, these dangers can be prevented with the common sense those systems lack: with properly informed regulation and by ensuring that AI tasked with dangerous operations is fully tested before it is put into use.
AUTONOMOUS WEAPONS
The idea of applying even narrow AI to government-operated military weapons is a frightening thought for most people. Narrow AI–enabled weapons in the hands of terrorists are perhaps even scarier. The most terrifying scenario would be AGI-based military systems, which would open the door to Terminator-like scenarios and other horrifying possibilities. Fortunately, AGI is not happening.
Unmanned aerial vehicles (UAVs) without AI have been used in warfare since the US began deploying them after the 9/11 attacks. These UAVs include drones, which are controlled remotely by operators at consoles, much like a video game. The weapons range in size from hobbyist quadcopters with an attached bomb to small aircraft carrying multiple missiles. The operator views the video produced by a camera on the drone, and when they see the target (which could be anything from a large military installation to an individual terrorist), the operator initiates the attack. The actual attack occurs either by a sm...

Table of contents

  1. Cover
  2. Title
  3. Copyright
  4. Contents
  5. Foreword by Roger C. Schank
  6. Preface
  7. Chapter 1: The Social Impact of AI
  8. Chapter 2: Fears Worth Having
  9. Chapter 3: A Brief History of AI
  10. Chapter 4: Employment
  11. Chapter 5: Supervised Learning
  12. Chapter 6: Deception
  13. Chapter 7: Unsupervised Learning
  14. Chapter 8: What Drives Self-Driving Cars
  15. Chapter 9: Reinforcement Learning
  16. Chapter 10: Privacy
  17. Chapter 11: Neural Networks and Deep Learning
  18. Chapter 12: Natural Language Processing
  19. Chapter 13: Thinking and Reasoning
  20. Chapter 14: Discrimination
  21. Chapter 15: Artificial General Intelligence
  22. Chapter 16: AI Will Not Take Over the World—Unless We Let It
  23. Acknowledgments
  24. Endnotes
  25. Index
  26. About the Author