
How Intelligence Happens

John Duncan

About This Book

A lively journey through the brain's inner workings from "one of the world's leading cognitive neuroscientists" (The Wall Street Journal). Human intelligence builds sprawling cities, vast cornfields, and complex microchips. It takes us from the atom to the limits of the universe. How does the biological brain, a collection of billions of cells, enable us to do things no other species can do? In this book, neuroscientist John Duncan offers an adventure story—the story of the hunt for basic principles of human intelligence, behavior, and thought. Using results drawn from classical studies of intelligence testing; from attempts to build computers that think; from studies of how minds change after brain damage; from modern discoveries of brain imaging; and from groundbreaking recent research, he synthesizes often difficult-to-understand information into clear, fascinating prose about how brains work. Moving from the foundations of psychology, artificial intelligence, and neuroscience to the most current scientific thinking, How Intelligence Happens is "a timely, original, and highly readable contribution to our understanding" (Nancy Kanwisher, MIT) from a winner of the Heineken Prize for Cognitive Science.

Information

Year: 2010
ISBN: 9780300168730
Chapter 1 The Machine
Thirty years ago, I took a train to Heathrow. I was meeting a good friend, one of many I had made in two years of postdoctoral work at the University of Oregon. This was his first visit to Britain. On the ride back from the airport, looking out over the manicured hedges and fields of the English countryside, he said with wonder, “Oh man, what a conquered country.” Very occasionally, flying over Siberia or Greenland, I have looked down on a country that seemed largely unconquered. Otherwise, though, our environment is shaped and filled by the products of the human mind.
For example, looking up from my work here I see a desk, a computer, sheets of paper; a window, with a house ahead, a gravel drive to the left, a road beyond with passing motor vehicles; beyond that, more houses, television antennas, electricity lines. To the right are gardens, but little in those gardens grew there on its own; these plants grew because a person wanted them, dug the earth to plant them, fed them fertilizer, pulled up weeds. In among the plants are fences, sheds, washing lines; steam rises from boiler outlets; planes pass overhead in the sky. It would be nice to believe that eager readers have taken my book with them to the ice fields of Antarctica, but more likely, the environment that you will see, raising your eyes from the page, will be just as conquered as mine.
No wonder we find our own minds so fascinating. They give us our human world, with its greatest achievements: medicine, art, food production, shelter, and warmth, all products of the human mind and the power it gives us to transform our existence. They give us also many of the greatest hazards we and our planet face: climate change, the destructions of war, enormous imbalances in the distribution of food and other goods, pollution and ecosystem destruction, pandemics brought on by our own behavior—all products of human choice and action, all avoidable if our minds did not function as they do.
Every organism has its own ecological niche and the special features that have allowed it to survive and flourish. Just as the cheetah runs and the caterpillar sits motionless along the blade of a leaf, so we have our unique intelligence: the intelligence that created the desk, the window, the passing cars and planes. We love to watch this intelligence at work, as a child first fits together the pieces of a jigsaw or recites her first nursery rhyme, as a student stares intently at the calculus teacher and suddenly, from nowhere, there is a fizz of understanding and shared delight. We admire human intelligence in architecture, in well-oiled machinery, in an argument perfectly constructed. This is what we are, and this is why so much of our world is now so firmly in our hands.
But how? Surely, the nature of human intelligence is among the most challenging, the most fascinating, and—both for ourselves and for our planet—the most essentially important of questions. How should we understand the human mind and the human behavior that so powerfully shapes our world?
One approach to understanding human minds is thoroughly familiar. It is how we grew up, how we operate many times each day, how we manage our affairs. Essentially, it is the explanation of human choice and action through reason. We see ourselves as rational agents. Our choices have reasons, and when we describe those reasons, we explain the things that we chose to do.
This perspective is evident in all that we do. History’s explanations are accounts of what people wanted, what they knew or believed, what they intended to achieve. As the Russians retreated before Napoleon, they burned crops because they intended the French army to starve. John F. Kennedy held back from a strike on Cuba because he believed that such a strike could force the Russian leadership into nuclear war. The concerns of the law are with choices, reasons, intentions. Only an intentional act is a crime; a murder is a murder not because the victim is dead but because this death was intentionally brought about. We give our own reasons to explain our own behavior, and we use the reasons of others to predict or influence what they do. To ensure that four people will assemble on a tennis court to play a doubles match, we concern ourselves with their knowledge and their desires. We make sure that they wish to play and that they know the time and place. In education, we fill children’s minds with the knowledge that they will need to guide rational thought, from the steps of a geometric proof to a balanced appreciation of the rights of others. In politics, we act to change the reasons of others—we debate, negotiate, persuade, argue, bargain, or bribe.
This rational perspective is certainly natural, and in our daily lives it is very effective. One person is out shopping for clothes; another is at work; one is preparing dinner; a fourth is landing from a trip to South America. Yet with one small sentence typed into an email program, it can be ensured that all four individuals will converge at the same place at the same time, carrying racquets and ready for tennis. We are so used to it that we forget to think how remarkable it is that four animals can coordinate their activity in this way.
When we explain by reason, it is perhaps apt to say that we think of ourselves as subjects rather than objects. From this perspective, we are free agents, the causes and not the effects in the world we inhabit. We evaluate options, choose as we wish, and are responsible and accountable for those choices. If we are asked why we did something, the explanations we give will refer to the reasons we had. Free agents do as they wish; they do as their reasons dictate.
Hidden behind this, though, is a different perspective. Sometimes, we explain ourselves in a different way. We acknowledge that we forgot to stop on the way home to pick up milk. We say that we drove foolishly because we were angry with the children. Many years ago, I conducted research on absent-minded slips and how they happen. My favorite was, “I filled the washing machine with porridge.” (Even more pleasing was the woman who, responding to this item on a questionnaire, said that she did this sort of thing, not “never,” not “rarely,” not “often,” but “nearly all the time.” Her clothes, however, appeared normal.) In these cases, suddenly we do not explain behavior as a free choice, as the intention to achieve something by a certain means. Instead, we are saying something about the choice process itself. We are acknowledging that reasoning has its limits—that sometimes it goes well, but sometimes it does not.
For the science of mind and brain, this second perspective is central. From this perspective, we are biological machines with biological limits. Indeed, we think and reason, we form wishes, beliefs, plans, and intentions. But these reasons are not created in the abstract—they are created by the machine. In explaining human behavior, understanding reasons is only half the story. The other half is understanding the machine by which reasons are made. This is the half that this book is about.
From this perspective, what we want to know is how the machine works. What are reason and thought, and how do they work in the human mind and brain? What is human intelligence: How does it extend the intelligence of other animals? How does it relate to the intelligence of thinking computers? How can it arise from billions of tiny nerve cells communicating by brief electrical impulses?
In some ways science comes naturally. We find it natural to apply a perspective of objective inquiry to the understanding of molecules, planets, forces, diseases—indeed, almost anything at all. In my view, this comes easily because science is simply a more systematic version of our natural, everyday fascination with knowledge—with a fundamental understanding of how our world works. It is sometimes suggested that babies are born as natural scientists. Crawling across a lawn, the baby touches a thistle. He pulls back; reaches out carefully to test again; pulls back again; tests again. He is born to observe and to fill his mind with useful knowledge, with the knowledge he will use to navigate through life. As Francis Bacon put it, "Ipsa scientia potestas est"—in itself, knowledge is power. In science, our natural fascination with knowledge is simply put into organized, institutionalized form.
Natural though we find it in most cases, the objective analysis of science comes much less naturally when we apply it to our own minds. There is an unsettling tension between our usual perspective on ourselves—as rational agents looking from the inside out—and the opposite perspective of science, observing the machine from the outside in.
As soon as we look at ourselves from outside in, clear cracks appear in the “free agent” impression. Of course, it is quite obvious that our behavior is not at all free and unconstrained. Instead, like any other entity, we have our own properties, potentialities, and limits. Our behavior is explained, not only by the reasons we had, but by the limited reasoning machinery we have at our disposal.
Perspectives become more compelling with use, and after more than thirty years as an experimental psychologist, looking at myself and those around me as object rather than subject, it is now quite hard for me to recapture the sense of how I used to think. Quite often, though, the opposing perspectives clash. For example, a few years ago I was attending a conference in New York. In a bar I was introduced to a neuroscientist who worked in a distantly related field, and soon we were in a debate. Putting some key fact onto the table, she said, “I’m absolutely certain that this is true.” “Aha,” I replied, “but what’s the correlation between your confidence and your accuracy?” The question is reasonable but impolite, and slightly too late I realized that I had stepped over the brink between psychologist and normal person. In everyday life we do not like it when our own reason machines are analyzed and questioned; the outside in perspective is unsettling (at best). But for the science of the reason machine, just such questions are the daily fare.
I have always been fond of this anti-psychology joke. Two psychologists pass in the street. One says, “Hello!” The other one continues on his way, thinking, “I wonder why he said that?”
Let me begin with a small example of the limits of the free agent, an example of irresistibly doing something against our will. In psychology it is called the Stroop effect, after the American psychologist John Ridley Stroop, who first described it in 1935.1 A person is asked to scan as quickly as possible down a sheet of paper, calling out the colors of everything he sees. (To avoid repetitions of “he or she,” I shall make my imaginary person a man in all these examples. Also, in line with the fact that this man is now the subject matter of investigation, I shall follow the convention of experimental psychology and call him “the subject.” To be consistent with what I just said, I might have called him “the object”—but even experimental psychology has not generally gone that far. Usually, to make the results more reliable, psychological experiments gather data over many repetitions of the same task; we call these repetitions “trials.”) The subject knows he will be timed. His task is to finish each trial as quickly as possible. The experimenter sits with a stopwatch to see how fast it can be done.
In the first version, the items on the page are rows of Xs written in different-colored inks. The subject goes down the list from top to bottom, calling out the colors as fast as possible; when he gets to the bottom, the result is written down. In the second version, there is one small change. Now the items on the page are not just rows of colored Xs; they are colored words that spell out color names. Specifically, the words spelled are different from the colors of the inks—so that, for example, the subject might see the word "blue" written in orange ink or the word "green" written in purple. For the subject this should not matter. He is not asked to read the words; in fact, he should ignore them. As before, he just has to go down the list naming all the ink colors as fast as possible. Suddenly, though, this is much harder to do. Every time he tries to name an ink color, the word he is looking at also pops into mind. He slows down, makes false starts, may even read an occasional word out loud. Free will? This experiment shows that we cannot even choose to avoid the simple act of reading.
Experimental psychology commonly focuses on such limits or constraints on mental ability. In part, this is because limits are often helpful in understanding how something works. In part, it is because limits are constantly brought to our attention in practical situations, and bypassing them can be an important practical concern. Though we are rarely asked to name colors as quickly as possible, especially not when these colors belong to conflicting words, we are often required to do several things at once. In the early 1950s, psychologists became interested in our limited ability to divide our attention. This interest had its origins during World War II, when psychologists had been employed to address practical military problems of this sort: air-traffic controllers dealing with simultaneous radio calls or fighter pilots handling multiple cockpit controls. How much information could a person actually process at one time? How did this depend on the way that the information was presented or what sort of information it was? Again, the results of these experiments show severe limits on our ability to do what we want.
Through the 1950s, our limited ability to process simultaneous events was investigated by such psychologists as Colin Cherry, Christopher Poulton, and Donald Broadbent.2 In a typical experiment, the subject heard two simultaneous speech messages, one arriving through a headphone on the left ear, the other through a headphone on the right. To demand careful attention to one message, perhaps the one on the right, the subject was asked to repeat the message back continuously as it came in. After a minute or two of this, he was stopped and asked questions about the other message. With his attention focused on one message, how much did he manage to pick up from the other?
By and large, the answer from these experiments was: astonishingly little. Usually, the subject would not know if the ignored message had concerned air transport or classical literature. In fact, he would not even know that, halfway through, the message had changed from English to German or to speech played backwards. To allow the subject to pick up anything at all from the second message, the most extreme changes had to be made. For example, usually he would notice if, for significant lengths of time, the spoken message was changed to a continuous tone. Similar results were found if, while repeating back one message, the subject was also asked to detect occasional target words—perhaps color words—in either message. For the message he was repeating, the targets would usually be detected, but for the other message, most targets were missed.3
In 1960 an intriguing variation on this experiment was designed by a young University of Oxford student named Anne Treisman. As usual, the subject was set to repeat back one message—perhaps the one on the right. As usual, he knew little of the other, in this case the one on the left. At an unpredictable moment, the two messages switched sides. The message that had previously been arriving on the right now switched to the left, whereas the message that had previously been ignored on the left now switched to the right. This was all irrelevant to the subject, whose instruction was to keep on repeating whatever message came from the right, no matter what it was. Nevertheless, in a good proportion of trials, the subject stumbled as the switch occurred and even for a word or two continued to follow the previous message, though now it was arriving on the left. The experiment raises intriguing questions. If a person cannot even tell whether the left ear has normal English or reversed German speech, how can he (at some level) “know” when the left ear continues the sense of the message he has just been following—knowledge sufficient to cause him to break the rule he has been given and switch to repeating things from the wrong ear? Again, such experiments point up severe limits on our mental activity—limits on our ability to do or to achieve things we firmly wished and intended to achieve.4
Faced with examples like these, our natural human reaction is to shift somewhat the goalposts of free will. Obviously, we reason, our minds/brains have some basic, low-level limitations. Of course, we have always known that we cannot do fifteen things at once. Really, though, this is not what we mean by free will and rationality. At a higher level, we remain rational agents who freely choose the course of action that is best.
So let’s move up to some of the limits on reason itself. Again, experimental psychology can provide textbooks full of examples, but I will just give one that I especially like. It comes from one of the great wise men of British psychology, Peter Wason, and a series of experiments carried out in the early 1960s.5
In these experiments, the subject simply has to discover the rule that is used to generate sets of three numbers. He is given the first set of three: “2, 4, 6.” He is told that he must discover the rule by generating additional sets of three numbers. Each time he generates a set, the experimenter will tell him whether it obeys the rule. Then, when the subject is sure that he knows the rule, and not before, he should stop the experiment and announce what the rule is. He can take as much time as he likes.
The experiment typically unfolds this way. The subject generates a set like “4, 6, 8.” He is told that it satisfies the rule. He tries “10, 12, 14” and is told again that it obeys the rule. If particularly cautious, the subject may try “1, 3, 5” and even “1022, 1024, 1026,” both of which are equally successful. At this point he announces that he is terminating the experiment and that the rule is “three numbers increasing in 2s.”
He is told that this is not the rule and is asked to continue.
At this point things begin to drift out of control. The experiment can continue for half an hour or more, as the subject traps himself in a spiral of increasingly elaborate hypotheses. A common next step is to imagine that the middle number must be the average of the first and last. The subject tries "3, 6, 9" and "13, 27, 41." Or the subject may begin to think that the first number doesn't matter; only the second increase by 2 is important. He generates "1, 4, 6" and "27, 32, 34." When he hears again that his choices match the experimenter's rule, it seems impossible that the baroque rule he has thought of could be wrong, so again he stops the experiment and announces it. Again he is told that he is wrong, but surely such a specific rule that generated correct examples must have been near the truth? The subject now constructs an even more complex rule, attempting to accommodate the structure of the previous rule while adding some new, apparently arbitrary twist. Arbitrary though it is, the new examples it generates usually obey the rule, and the web gains a new thread.
In fact, the rule of the experiment is simply “three numbers in increasing order.” How on earth can a rule that is so simple be so difficult to discover, even by subjects recruited from a university science department? As Wason explained, the answer is a bias to confirmation rather than disconfirmation. Under the influence of this bias, we are blind to most possible explanations for the data at hand. How rational are we when we ignore what is glaringly obvious?
Confirmation bias works like this. When we start with “2, 4, 6,” the “increase in 2s” rule is our obvious first thought. (It is an interesting further topic why this particular rule is the “obvious” one.) With this hypothesis in mind, confirmation bias leads to generation of further candidates that satisfy the hypothesis. When these candidates are also correct, it is impossible to believe that the hypothesis is wrong—the more so, the more complex it becomes. The problem is that we have given essentially no thought to all the other, equally sensible hypotheses that were not our first, preferred candidate (including the correct one). With the idea of other hypotheses in mind, perhaps we should have tried the strategy that actually works for problems like this—the strategy, in fact, that is widely lauded as the method of choice for science in general. This is the attempt to disconfirm, by generating examples that do not satisfy the original hypothesis. This strategy might have led to such triplets as “1, 2, 3” and then “2, 4, 17”—found to satisfy the experimenter’s rule, disconfirming the original “increase in 2s” hypothesis, and leading directly away from the spiral of apparent confirmation for increasingly complex, increasingly irrelevant ideas.
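The logic can be made concrete with a minimal sketch (my illustration, not from the book; the function names are invented for the example). Every triplet generated under the "increase in 2s" hypothesis also satisfies Wason's real rule, so such probes can only ever confirm; a probe like "2, 4, 17" that violates the hypothesis is what actually carries information.

```python
# Illustrative sketch of Wason's 2-4-6 task (names are my own, not the book's).

def true_rule(triple):
    """Wason's actual rule: any three numbers in increasing order."""
    a, b, c = triple
    return a < b < c

def increase_by_twos(triple):
    """The typical first hypothesis: each number is 2 more than the last."""
    a, b, c = triple
    return b == a + 2 and c == b + 2

# Probes generated under the hypothesis can only ever agree with the real rule:
for probe in [(4, 6, 8), (10, 12, 14), (1022, 1024, 1026)]:
    print(probe, "obeys the rule?", true_rule(probe))  # True every time

# A probe chosen to disconfirm the hypothesis is far more informative:
probe = (2, 4, 17)
print(probe, "obeys the rule?", true_rule(probe))              # True
print(probe, "fits the hypothesis?", increase_by_twos(probe))  # False
# The hypothesis is too narrow; only a disconfirming test can reveal that.
```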
Once seriously examined, the idea of ourselves as optimal reasoning agents seems absurd. Its appeal, nevertheles...
