Blind Spot

Dr. Gordon Rugg

About This Book

The Voynich Manuscript has long been considered the world's most mysterious book. Filled with strange illustrations and an unknown language, it challenged the world's top code-crackers for nearly a century.

But in just four-and-a-half months, Dr. Gordon Rugg, a renowned researcher, found evidence (which had been there all along) that the book could be a giant, glittering hoax.

In Blind Spot: Why We Fail to See the Solution Right in Front of Us, Dr. Rugg shares his story and shows how his toolkit of problem-solving techniques—such as his Verifier Method—can save the day, particularly in those times when the experts on your team have all the data in front of them but are still unaccountably at an impasse.

In the tradition of Malcolm Gladwell and Dan Ariely, Dr. Rugg, a rising star in computer science, challenges us to re-examine the way we think, and provides new tools to solve problems and crack codes in our own lives.

Information

Publisher: HarperOne
Year: 2013
ISBN: 9780062134738
1
The Expert Mind
In the sixteenth century, excavators discovered a glass vase in some ruins outside Rome. Roman glassworkers were good by anyone’s standards, but this piece of glassware is exceptional. It’s made of two layers of glass, a dark blue inner layer with a white outer layer. The surface is carved into rich, intricate cameos depicting humans and gods. It’s known now as the Portland Vase, after the British aristocrat who bought it. It was made around a couple of thousand years ago. The glass-cutting style used to make the vase is so complicated, so intricate, that archaeologists think it probably took the original craftsman two years to make.
One way of measuring the skill of the vase’s maker is to look at how long it was before anyone was able to produce another piece of glassware like it: it took almost two thousand years. Producing a replica became a challenge for the best glassmakers of the Industrial Revolution. The vase became such an iconic challenge that Josiah Wedgwood developed a whole line of Wedgwood pottery inspired by it. The Great Exhibition of 1851 was a high-profile showcase for the crowning achievements of the era—the most sophisticated technology in the world. It didn’t contain a replica of the Portland Vase, because nobody had been able to produce one. The first passable-quality replica was made in the 1870s.
It was that difficult. That’s one of the first lessons about experts.

LESSON 1
Experts can do things that the rest of us aren’t able to do.

We’re surrounded by examples of the complexity of expert knowledge in everyday life, to the point where we don’t even notice them most of the time. Most offices have computer support staff; they are experts at doing things most of us don’t begin to understand. On the way to the office, many of us travel by train. We wouldn’t have a clue how to drive a train, maintain the tracks, or schedule service. If you drive to work, you’re using a vehicle that requires expertise to service and repair; it’s not advisable to try diagnosing and replacing a faulty fuel injection system using general knowledge and common sense.
Experts are good, often impressively good, and they can do things that nonexperts can’t. However, that doesn’t mean they’re immune to making mistakes. If we understood how experts think, we could probably help them avoid those mistakes. But it turns out that’s easier said than done. In this chapter, I’ll try to explain why it’s so hard to know what experts know.
The Real Lightsabers and the Unreal View of War
One area where there’s been a long-standing tension between true expertise and mistaken expert opinion is the world of war. Ever wondered what inspired the lightsabers in the Star Wars movies? George Lucas and his team were probably inspired by the name of the nineteenth-century light saber, light as in not heavy. This isn’t a steampunk creation bristling with brass work, dials, and levers; it’s simply the lighter alternative to the heavy saber.
Learning how to fight with a full-weight saber was dangerous, so fencing instructors, treading an uneasy line between realism and injury, began using lighter sabers. They deliberately minimized injury at the expense of realism, but the rank-and-file cavalry continued to use full-weight heavy sabers in battle. That choice made it more difficult for the instructors to teach cavalrymen how to fight an opponent who was using more brute force than skill. And the soldiers, thinking their instructors were out of touch after years of working with light sabers, tended to be skeptical about how much real expertise the fencing masters had. The soldiers’ experience on the battlefield, documented in numerous diaries of ordinary troopers, suggested that they were often right to be skeptical. So the counterpoint to the last conclusion about experts is cautionary:

LESSON 2
Experts’ skills don’t always correspond to reality.

For centuries, it was taken for granted by most people that expertise had some features that distinguished it. It was generally assumed that experts were better at pure logic than lesser mortals, and that they used this logic combined with their higher intelligence to solve problems. There was also a strong element of snobbishness, with an implicit assumption that “real” expertise was the province of white-collar professionals, with manual skills excluded from the club of expertise. Chess was often viewed as the archetypal demonstration of expertise in action: a good chess player can easily beat a weak chess player, so there’s clearly some real expertise involved, and chess is an abstract, cerebral skill, unlike sordidly manual skills, such as glassmaking or hacking someone to death on a battlefield.
When the French chess master François-André Danican Philidor played three simultaneous blindfold games of chess in 1783, this was hailed as one of the highest achievements of the human intellect. Expertise distinguished humans from lesser creation, as well as distinguishing upper-class intellectuals from the lower orders. It was a comforting view, which wasn’t seriously challenged until the 1960s, when everything changed.
Crumbling Walls
One of the first challenges to this cozy belief came from research into chess masters. In the 1940s, a Dutch psychologist named Adriaan de Groot and his colleagues began investigating how chess masters actually operated, as opposed to how everyone assumed they operated. This led to an important finding.

LESSON 3
Experts are not significantly more intelligent than comparable nonexperts.

De Groot and his colleagues found, to their surprise, that chess masters weren’t significantly more intelligent than ordinary chess players. Nor did they have a significantly better memory for the positions of chess pieces placed randomly on a chessboard. Their expertise turned out to be coming from a completely different source: memory about chess games.
What set chess masters apart was that they could remember enormous numbers of gambits, strategies, tactics, placements of combinations of pieces, previous examples, and on and on, from games they had played, games they had watched, or famous games in history that they had studied. Although they weren’t good at remembering random arrangements of pieces on a chessboard, they were very good at remembering nonrandom arrangements, such as a particular configuration of a king and several supporting pieces. The number of such memories was staggering: a chess master typically knew tens of thousands of pieces of information about chess. Later, when other psychologists began studying the way experts thought, a remarkably similar picture emerged from research into other areas of expertise: what defined an expert always turned out to be the possession of tens of thousands of pieces of information, which typically took about seven to ten years to acquire.

LESSON 4
Experts retain huge numbers of facts about their areas of expertise.

These findings were brutally consistent, even for venerated prodigies such as Mozart, who began playing music at age three and composing by age five. But if you look at the gap between his first compositions and the first of his compositions that stand up to comparison with those of expert composers, the period is about seven to ten years. This discovery gave researchers pause: maybe expertise was simply a matter of time, experience, and training. Maybe if you put in enough time working at something, you would eventually become an expert.
Another blow to the foundations of the old views again came from chess. The first modern computers emerged during the 1940s. Within a couple of decades, there were computer programs that could beat an average-level human chess player. Soon after, computer programs could beat masters, and then grand masters.
Everyone had expected chess to be one of the last bastions of human supremacy over dumb beasts and machines. What was more surprising was that computers had far more difficulty with the apparently simple game of Go, the ancient Asian board game, than with chess. That’s because skill at Go relies more on nonverbal spatial knowledge—where the tiny black-and-white pieces are located on the board—which is tough to program. That’s a theme we’ll encounter repeatedly throughout this story.

LESSON 5
Just because a human finds something difficult doesn’t mean that it necessarily is difficult.

The lessons from this research into the minds of experts weren’t lost on applied researchers and industry. In fact, this research was leading to the next inevitable step: taking what we had elicited from human experts and using it to help us program computers to work better with humans or to perform tasks too tedious or complex for humans. In this chapter, I’d like to show you what scientists like me did with what they learned. In some cases, what we knew about experts helped us enormously. In others, it left us wondering if we didn’t need to know still more about how experts think.
In the 1970s psychologists such as Daniel Kahneman, Paul Slovic, and Amos Tversky conducted a campaign parallel to the expertise research, looking at the types of mistakes humans make. They found that for some problems, simple mathematical linear equations could outperform human experts. By that, we mean that a small piece of software could predict whether or not a particular customer would default on a loan more accurately, more reliably, and much more cheaply than an experienced bank manager could. That finding resulted in a lot of middle-ranking bank managers losing their jobs and being replaced by software. If you apply for a loan today, the decision on your application will almost certainly be made by a piece of software. Similarly, if there’s a suspicious pattern of activity with your credit card, it will probably be spotted by a piece of software, which will trigger a check by a human being to make sure that your card or its number hasn’t been stolen.
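To make that idea concrete, here is a minimal sketch of the kind of linear scoring rule being described. The feature names, weights, and cutoff below are invented purely for illustration; a real credit-scoring model would estimate its weights from historical lending data rather than use hand-picked numbers.

```python
# A minimal sketch of a linear credit-scoring rule.
# The features, weights, and cutoff are invented for illustration;
# a real model would fit its weights to historical loan outcomes.

def default_risk_score(applicant):
    """Weighted sum of a few applicant features (hypothetical weights)."""
    weights = {
        "debt_to_income_ratio": 2.5,    # higher ratio -> higher risk
        "late_payments_last_year": 0.8,
        "years_at_current_job": -0.3,   # stability lowers risk
    }
    return sum(weights[name] * applicant[name] for name in weights)

def approve_loan(applicant, cutoff=1.0):
    """Approve only if the predicted risk falls below the cutoff."""
    return default_risk_score(applicant) < cutoff

# Example applicant (made-up numbers)
applicant = {
    "debt_to_income_ratio": 0.35,
    "late_payments_last_year": 1,
    "years_at_current_job": 4,
}
print(approve_loan(applicant))  # True: score = 0.875 + 0.8 - 1.2 = 0.475
```

The point of the research was not that this particular equation is right, but that even a crude weighted sum, applied consistently, can beat an expert whose judgment wobbles from case to case.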
That finding had implications for other areas, such as medicine. Researchers wanted to create software systems that diagnosed illnesses the way doctors did. But the researchers wondered if the systems they built could go beyond mere imitation and actually improve the rate of correct diagnoses for life-threatening illnesses. The early signs looked promising. In the early 1970s, scientists at Stanford University developed a program called MYCIN, which could diagnose some categories of infectious diseases more accurately and more reliably than expert human diagnosticians. A doctor had to answer a series of simple questions about a particular case in order for the system to produce a diagnosis. This was so successful that by the 1980s, expert systems were already outperforming human experts in a range of areas, and it looked as if a new era of software-supported medicine was about to dawn. But then reality got in the way, as a slew of problems emerged.
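MYCIN's real rule base and its certainty-factor arithmetic were far more elaborate than anything that fits here, but a toy sketch conveys the general shape of a rule-based diagnostic system: ask the doctor simple questions, then fire whichever rules match the answers. The questions, rules, and conclusions below are invented and are not MYCIN's.

```python
# A toy rule-based diagnostic loop, loosely in the spirit of systems
# like MYCIN. The questions, rules, and conclusions are invented;
# MYCIN itself used hundreds of rules plus certainty factors.

RULES = [
    # (required yes-answers, conclusion)
    ({"fever", "stiff_neck"}, "consider bacterial meningitis"),
    ({"fever", "productive_cough"}, "consider bacterial pneumonia"),
]

QUESTIONS = {
    "fever": "Does the patient have a fever?",
    "stiff_neck": "Does the patient have a stiff neck?",
    "productive_cough": "Does the patient have a productive cough?",
}

def diagnose(ask):
    """Ask simple yes/no questions, then fire any rule whose conditions hold."""
    answers = {finding for finding, text in QUESTIONS.items() if ask(text)}
    return [conclusion for conditions, conclusion in RULES
            if conditions <= answers]

# Example run with canned answers standing in for a human doctor.
canned = {"Does the patient have a fever?": True,
          "Does the patient have a stiff neck?": True,
          "Does the patient have a productive cough?": False}
print(diagnose(lambda q: canned[q]))  # ['consider bacterial meningitis']
```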
In the fall of 1986, I moved to Nottingham to begin working on a problem that was causing a lot of difficulty for expert systems. It’s known as the “knowledge acquisition bottleneck,” and I was to encounter it repeatedly in the years ahead. It’s about extracting knowledge from human beings to put into the expert system. To write a robust piece of software, you need to base it on solid knowledge from the real world. Getting at that solid knowledge was turning out to be a harder task than anyone had anticipated.
In the old days, software was built using what is known as the waterfall model. The software developer would interview the client, and then draw up a document that specified in detail what the system would do. Once that agreement was signed, the software developer went away and built the software. The clients weren’t involved in the build; when they put their signatures on the contract, they were committed to that plan, as irrevocably as a log going over a waterfall.
Developers got information out of the experts via the traditional interview, and most software developers believed that this worked just fine. If the client forgot to mention an important requirement during the interview, that was the client’s problem, and the developer would pick up a further fee for fixing the problem. But when the software keeps having problems after delivery, or each new version has new problems, or the software simply doesn’t do what the client wants, the limitations of this approach become obvious.
The first expert systems were built in one of two ways. Some were built by people who had come into the field from traditional software development; they used interviews because interviews were the only approach they had ever known. Others were built by what are known as domain experts: people who were already experts in the relevant area and who had then learned how to build expert systems. These people wrote their own knowledge into the software, without needing to interview anyone. Whichever route was used, the early expert systems could perform better than human experts for tightly defined problem areas. But when expert systems developers tried to scale up their systems to tackle bigger problems, it became clear that neither approach was going to work. The problems arose from simple practicality. There are some people who are willing and able to learn the skills needed to build an expert system, but there aren’t nearly enough for this to be a viable approach in most fields. If expert systems were going to be used on a wide scale, it was preferable to have them developed by specialized expert systems developers who acquired knowledge for each new field from human experts and whatever other sources were available.
However, the problem with the interview approach was that interviews were missing too much. They are fine for some purposes, and they seem easy to use. Most people think of them as the obvious and most sensible way to gather information. But in fact, the word interview can mean a lot of things, all of them with limitations. The classic distinction is between structured and unstructured interviews. In a structured interview, the interviewer has a list of prepared questions, and a flowchart of follow-up questions is triggered if the interviewee gives a particular answer. It all looks and sounds scientific, but the result is dependent on having the right list of questions, phrased in the right way, with the right options available for the follow-ups. By definition, when you’re gathering knowledge for a new expert system, you can’t know what the right questions are, or the right phrasings, or the right options. If you know enough to design a structured interview, you probably already have all the knowledge you need to build the expert system: a classic catch-22.
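One way to see why a structured interview presupposes the knowledge you are trying to gather is to write one down as a question tree: every prepared question, and every follow-up it can trigger, has to be scripted in advance. The sketch below uses invented questions about car fuel injection faults purely for illustration; the shape, not the content, is the point.

```python
# A toy structured-interview script: prepared questions, with follow-ups
# triggered by particular answers. All questions here are invented;
# the point is that every branch must be anticipated in advance.

INTERVIEW = {
    "start": {
        "question": "Is the engine failing to start, or running rough?",
        "answers": {"failing to start": "fuel_supply", "running rough": "idle"},
    },
    "fuel_supply": {
        "question": "What do you check first in the fuel supply?",
        "answers": {},  # no follow-ups scripted -> interview ends here
    },
    "idle": {
        "question": "What do you check first when the idle is rough?",
        "answers": {},
    },
}

def run_interview(ask, node="start"):
    """Walk the scripted question tree, collecting answers along the way."""
    transcript = []
    while node:
        step = INTERVIEW[node]
        answer = ask(step["question"])
        transcript.append((step["question"], answer))
        node = step["answers"].get(answer)  # unscripted answer ends the interview
    return transcript

# Example run with canned answers.
canned = {"Is the engine failing to start, or running rough?": "failing to start",
          "What do you check first in the fuel supply?": "fuel pressure"}
print(run_interview(lambda q: canned[q]))
```

Anything the script's author didn't anticipate simply never gets asked, which is the catch-22 in miniature.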
At the other end of the scale, there’s the unstructured interview. This has been satirically summed up by a cynical software developer: “Okay, tell me everything you know about Ford fuel injection systems.”
These problems are well known in fields like psychology, which deal with extracting knowledge and beliefs from human beings. So it was no accident that while at Nottingham I worked in the department of psychology, within a research group specializing in artificial intelligence.
Card Sorts and Laddering
If you interview experts about their specialist areas, sooner or later they’ll mention something they have never told you about before. Once, when I was collecting data at Nottingham, I asked a geologist how he could identify a particular type of rock in the field. The geologist went into great detail about rock identification. At one point, speakin...
