What to Think About Machines That Think

John Brockman

About This Book

Weighing in from the cutting-edge frontiers of science, today's most forward-thinking minds explore the rise of "machines that think."

Stephen Hawking recently made headlines by noting, "The development of full artificial intelligence could spell the end of the human race." Others, conversely, have trumpeted a new age of "superintelligence" in which smart devices will exponentially extend human capacities. Intelligent technology is no longer just a matter of science-fiction fantasy (2001, Blade Runner, The Terminator, Her, etc.); many forms of it are already integrated into our daily lives, and it is time to consider its reality seriously. In that spirit, John Brockman, publisher of Edge.org ("the world's smartest website" – The Guardian), asked the world's most influential scientists, philosophers, and artists one of today's most consequential questions: What do you think about machines that think?

Information

Year: 2015
ISBN: 9780062425669
SELF-AWARE AI? NOT IN 1,000 YEARS!
ROLF DOBELLI
Founder, Zurich Minds; journalist; author, The Art of Thinking Clearly
The widespread fear that AI will endanger humanity and take over the world is irrational. Here’s why.
Conceptually, autonomous or artificial intelligence systems can develop in two ways: either as an extension of human thinking or as radically new thinking. Call the first “Humanoid Thinking,” or Humanoid AI, and the second “Alien Thinking,” or Alien AI.
Almost all AI today is Humanoid Thinking. We use AI to solve problems too difficult, time-consuming, or boring for our limited brains to process: electrical-grid balancing, recommendation engines, self-driving cars, face recognition, trading algorithms, and the like. These artificial agents work in narrow domains with clear goals their human creators specify. Such AI aims to accomplish human objectives—often better, with fewer cognitive errors, distractions, outbursts of bad temper, or processing limitations. In a couple of decades, AI agents might serve as virtual insurance sellers, doctors, psychotherapists, and maybe even virtual spouses and children.
But such AI agents will be our slaves, with no self-concept of their own. They’ll happily perform the functions we set them up to do. If screwups happen, they’ll be our screwups, due to software bugs or overreliance on these agents (Dan Dennett’s point). Yes, Humanoid AIs might surprise us once in a while with novel solutions to specific optimization problems. But in most cases novel solutions are the last thing we want from AI (creativity in nuclear-missile navigation, anyone?). That said, Humanoid AI solutions will always fit a narrow domain. They’ll be understandable, either because we understand what they achieve or because we understand their inner workings. Sometimes the code will become too enormous and jumbled for one person to understand, because it’s continually patched. In these cases, we can turn it off and program a more elegant version. Humanoid AI will bring us closer to the age-old aspiration of having robots do most of the work while humans are free to be creative—or amused to death.
Alien Thinking is radically different. Alien Thinking could conceivably become a danger to Humanoid Thinking; it could take over the planet, outsmart us, outrun us, enslave us—and we might not even recognize the onslaught. What sort of thinking will Alien Thinking be? By definition, we can’t tell. It will encompass functionality we cannot remotely understand. Will it be conscious? Most likely, but it needn’t be. Will it experience emotion? Will it write bestselling novels? If so, bestselling to us or bestselling to it and its spawn? Will cognitive errors mar its thinking? Will it be social? Will it have a Theory of Mind? If so, will it make jokes, will it gossip, will it worry about its reputation, will it rally around a flag? Will it create its own version of AI (AI-AI)? We can’t say.
All we can say is that humans cannot construct truly Alien Thinking. Whatever we create will reflect our goals and values, so it won’t stray far from human thinking. You’d need real evolution, not just evolutionary algorithms, for self-aware Alien Thinking to arise. You’d need an evolutionary path radically different from the one that led to human intelligence and Humanoid AI.
So, how do you get real evolution to kick in? Replicators, variation, and selection. Once these three components are in place, evolution arises inevitably. How likely is it that Alien Thinking will evolve? Here’s a back-of-the-envelope calculation:
First, consider what getting from magnificently complex eukaryotic cells to human-level thinking involved. Achieving human thought required a large part of the Earth’s biomass (roughly 500 billion tons of eukaryotically bound carbon) during approximately 2 billion years. That’s a lot of evolutionary work! True, human-level thinking might have happened in half the time. With a lot of luck, even in 10 percent of the time, but it’s unlikely to have happened any faster. You don’t only need massive amounts of time for evolution to generate complex behavior; you also need a petri dish the size of Earth’s surface to sustain this level of experimentation.
Assume that Alien Thinking will be silicon-based, as all current AI is. A eukaryotic cell is vastly more complex than, say, Intel’s latest i7 CPU chip—both in hardware and software. Further assume that you could shrink that CPU chip to the size of a eukaryote. Leave aside the quantum effects that would stop the transistors from working reliably. Leave aside the question of the energy source. You’d have to cover the globe with 10³⁰ microscopic CPUs and let them communicate and fight for 2 billion years for true thought to emerge.
Yes, processing speed is faster in CPUs than in biological cells, because electrons are easier to shuttle around than atoms. But eukaryotes work massively in parallel, whereas Intel’s i7 runs only four cores in parallel. Eventually, at least to dominate the world, these electrons would need to move atoms to store their software and data in more and more physical places. This would slow their evolution dramatically. It’s hard to say if, overall, silicon evolution will be faster than biological. We don’t know enough about it. I don’t see why this sort of evolution would be more than two or three orders of magnitude faster than biological evolution (if at all)—which would bring the emergence of self-aware Alien AI down to roughly a million years.
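The arithmetic behind this estimate is worth spelling out. Below is a minimal Python sketch, purely illustrative, that plugs in only the figures the essay supplies: 2 billion years of biological evolution and Dobelli's assumed silicon speedup of two to three orders of magnitude. The variable names are mine.

```python
# Dobelli's back-of-the-envelope timescale, made explicit.
# The inputs are the essay's own figures; nothing else is assumed.

BIO_EVOLUTION_YEARS = 2e9  # eukaryotic cells -> human-level thinking

for orders_faster in (2, 3):  # his "two or three orders of magnitude"
    speedup = 10 ** orders_faster
    silicon_years = BIO_EVOLUTION_YEARS / speedup
    print(f"{orders_faster} orders of magnitude faster: "
          f"~{silicon_years:,.0f} years")

# Prints:
#   2 orders of magnitude faster: ~20,000,000 years
#   3 orders of magnitude faster: ~2,000,000 years
```

Even on the most generous assumption, the estimate stays in the millions of years, which is what licenses the essay's far stronger claim that self-aware AI will not arrive within 1,000 years.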
What if Humanoid AI becomes so smart it could create Alien AI from the top down? That’s where Leslie Orgel’s Second Rule kicks in: “Evolution is smarter than you are.” It’s smarter than human thinking. It’s even smarter than Humanoid Thinking. And it’s much slower than you think.
Thus, the danger of AI is not inherent to AI but rests on our overreliance on it. Artificial thinking won’t evolve to self-awareness in our lifetime. In fact, it won’t happen in 1,000 years.
I might be wrong, of course. After all, this back-of-the-envelope calculation applies legacy human thinking to Alien AI—which by definition we won’t understand. But that’s all we can do at this stage.
Toward the end of the 1930s, Samuel Beckett wrote in a diary, “We feel with terrible resignation that reason is not a superhuman gift . . . that reason evolved into what it is, but that it also, however, could have evolved differently.” Replace “reason” with “AI” and you have my argument.
MACHINES DON’T THINK, BUT NEITHER DO PEOPLE
CESAR HIDALGO
Associate professor, MIT Media Lab; author, Why Information Grows: The Evolution of Order, from Atoms to Economies
Machines that think? That’s as fallacious as people who think! Thinking involves processing information, begetting new physical order from incoming streams of physical order. Thinking is a precious ability, which unfortunately is not the privilege of single units such as machines or people but a property of the systems in which these units come to “life.”
Of course I’m being provocative here, since at the individual level we do process information. We do think—sometimes—or at least we feel like we do. But “our” ability to think is not entirely “ours”—it’s borrowed, since the hardware and software we use to think weren’t begot by us. You and I did not evolve the genes that helped organize our brains or the language we use to structure our thoughts. Our ability to think is dependent on events that happened prior to our mundane existence: the past chapters of biological and cultural evolution. So we can only understand our ability to think, and the ability of machines to mimic thought, by considering how the ability of a unit to process information relates to its context.
Think of a human born in the dark solitude of empty space. She’d have nothing to think about. The same would be true of an isolated and inputless computing machine. In this context, we can call our borrowed ability to process information “little” thinking—since it’s a context-dependent ability that happens at the individual level. “Large” thinking, by contrast, is the ability to process information embodied in systems, where units like machines or us are mere pawns.
Separating the little thinking of humans from the larger thinking of systems (which involves the process that begets the hardware and software that allow units to “little think”) helps us understand the role of thinking machines in this larger context. Our ability to think isn’t only borrowed; it also hinges on the use and abuse of mediated interactions. For human/machine systems to think, humans need to eat and regurgitate one another’s mental vomit, which sometimes takes the form of words. But since words vanish in the wind, our species’ enormous ability to think hinges on more sophisticated techniques to communicate and preserve the information we generate: our ability to encode information in matter.
For 100,000 years, our species has been busy transforming our planet into a giant tape player. The planet Earth is the medium wherein we print our ideas: sometimes in symbolic form, such as text and paintings, but, more important, in objects—like hair dryers, vacuum cleaners, buildings, and cars—built from the mineral loins of planet Earth. Our society has a great collective ability to process information because our communication involves more than words: It involves the creation of objects, which transmit not something as flimsy as an idea but something as concrete as know-how and the uses of knowledge. Objects augment us; they allow us to do things without knowing how. We all get to enjoy the teeth-preserving powers of toothpaste without knowing how to synthesize sodium fluoride, or the benefits of long-distance travel without knowing how to build a plane. By the same token, we all enjoy the benefits of sending texts throughout the world in seconds through social media or of performing complex mathematical operations by pressing a few keys on a laptop computer.
But our ability to create the trinkets augmenting us has also evolved, of course, as a result of our collective willingness to eat one another’s mental vomit. It is this evolution that now brings us to the point where we have “media” beginning to rival our ability to process information, or “little think.”
For most of our history, our trinkets were static objects. Even our tools were solidified chunks of order, such as stone axes, knives, and knitting needles. A few centuries ago, we developed the ability to outsource muscle and motion to machines, causing one of the greatest economic expansions in history. Now we’ve evolved our collective ability to process information by creating objects endowed with the ability to beget and recombine physical order. These are machines that can process information—engines that produce numbers, like the engines Charles Babbage dreamed about.
So we’ve evolved our ability to think collectively by first gaining dominion over matter, then over energy, and now over physical order, or information. Yet this shouldn’t fool us into believing that we think or that machines do. The large evolution of human thought requires mediated interactions, and the future of thinking machines will also happen at the interface where humans connect with humans through objects.
As we speak, nerds in the best universities of the world are mapping out the brain, building robotic limbs, and developing primitive versions of technologies that will open up the future when your great-grandchild will get high by plugging his brain directly into the Web. The augmentation these kids will get is unimaginable to us—and so bizarre by our modern ethical standards that we’re not even in a position to properly judge it; it would be like a sixteenth-century Puritan judging present-day San Francisco. Yet in the grand scheme of the universe, these new human/machine networks will be nothing other than the next natural step in the evolution of our species’ ability to beget information. Together, humans and our extensions—machines—will continue to evolve networks that are enslaved to the universe’s main glorious purpose: the creation of pockets where information does not dwindle but grows.
TANGLED UP IN THE QUESTION
JAMES J. O’DONNELL
Classical scholar; University Professor, Georgetown University; author, Augustine, The Ruin of the Roman Empire, Pagans
Thinking is a word we apply with no discipline whatsoever to a huge variety of reported behaviors. “I think I’ll go to the store,” and “I think it’s raining,” and “I think, therefore I am,” and “I think the Yankees will win the World Series,” and “I think I’m Napoleon,” and “I think he said he would be here, but I’m not sure”—all use the same word to mean entirely different things. Which of them might a machine do someday? I think that’s an important question.
Could a machine get confused? Experience cognitive dissonance? Dream? Wonder? Forget the name of that guy over there and at the same time know that it really knows the answer and if it just thinks about something else for a while, it might remember? Lose track of time? Decide to get a puppy? Have low self-esteem? Have suicidal thoughts? Get bored? Worry? Pray? I think not.
Can artificial mechanisms be constructed to play the part in gathering information and making decisions that human beings now play? Sure, they already do. The ones controlling the fuel injection in my car are a lot smarter than I am. I think I’d do a lousy job of that.
Could we create machines that go further and act without human supervision in ways that prove good or bad for human beings? I guess so. I think I’ll love them, except when they do things that make me mad—then they’re really being like people. I suppose they could run amok and create mass havoc, but I have my doubts. (Of course, if they do, nobody will care what I think.)
But nobody would ever ask a machine what it thinks about machines that think. That’s a question that makes sense only if we care about the thinker as an autonomous and interesting being, like ourselves. If somebody ever does ask a machine this question, it won’t be a machine anymore. I think I’m not going to worry about that for a while. You may think I’m in denial.
When we get tangled up in this question, we need to ask ourselves just what it is we’re really thinking about.
MISTAKING PERFORMANCE FOR COMPETENCE
RODNEY A. BROOKS
Panasonic Professor of Robotics, emeritus, MIT; founder, chair, and CTO, Rethink Robotics; author, Flesh and Machines
Think and intelligence are both what Marvin Minsky has called suitcase words—words into which we pack many meanings so we can talk about complex issues in shorthand. When we look inside these words, we find many different aspects, mechanisms, and levels of understanding. This makes answering the perennial questions of “Can machines think?” or “When will machines reach human-level intelligence?” difficult. The suitcase words are used to cover both specific performance demonstrations by machines and the more general competence that humans might have. We generalize from performance to competence and grossly overestimate the capabilities of machines...
