SECTION II
Cognitive Biases and Their Importance
CHAPTER 9
Introduction
Cognitive bias is a "systematic error in judgment and decision making common to all human beings, which can be due to cognitive limitations, motivational factors, and/or adaptations to natural environments."1 Some of the logical fallacies discussed earlier in this book can be a source of cognitive bias, such as interpreting correlation as causation in the Affirming the Consequent fallacy. Other sources are covered in some of the Think Better sections (see the discussion on Behavioral Economics). But could another source be the very evolutionary processes that created us Homo sapiens?
Having a comprehensive understanding of all these sources of cognitive bias helps us develop "epistemic humility": being humble about the knowledge we think we possess, which is a prerequisite for being a good critical thinker.
The remainder of this section is excerpted from Kevin deLaplante's discussion of cognitive biases, which can be found on his Critical Thinking Academy website.2 We include it in our book because it is an excellent and relevant discussion of this subject matter.
Additionally, at various points in this section, we include shaded sidebar discussions focusing on how deLaplante's article relates to critical thinking in marketing.
* * *
CHAPTER 10
What They Are and Why They're Important
Everyone agrees that logic and argumentation are important for critical thinking. [And an important component of improving one's critical thinking skills is] background knowledge ....
There are different types of background knowledge that are relevant to critical thinking in different ways. One of the most important types of background knowledge is knowledge of how our minds actually work: how human beings actually think and reason, how we actually form beliefs, how we actually make decisions.
There are a lot of different scientific fields that study how our minds actually work. These include behavioral psychology, social psychology, cognitive psychology, cognitive neuroscience, and other fields. Over the past 40 years we've learned an awful lot about human reasoning and decision making.
A lot of this research was stimulated by the work of two important researchers, Daniel Kahneman and Amos Tversky, going back to the early 1970s. They laid the foundations for what is now called the "biases and heuristics" tradition in psychology.1
Normative Versus Descriptive Theories of Human Reasoning
To get a feel for the importance of this research, let's back up a bit. When studying human reasoning you can ask two sorts of questions. One is a purely descriptive question: how do human beings in fact reason? The other is a prescriptive or normative question: how should human beings reason? What's the difference between good reasoning and bad reasoning?
When we study logic and argumentation, we're learning a set of rules and concepts that permit us to answer this second question: how should we reason ....
... [O]ver time, we've developed a number of different theories of rationality that give us norms for correct reasoning in different domains.
This is great, of course, [as] these are very powerful and useful tools. [Some of these tools are the focus of this book.]
Now consider the study of how human reasoning actually works. Before Kahneman and Tversky's work in the 1970s, there was a widely shared view that, more often than not, the mind, or the brain, processes information in ways that mimic the formal models of reasoning and decision making familiar from our normative theories: formal logic, probability theory, and decision theory.
This "widely shared view" has influenced the methods marketing researchers often use. For example, three commonly used methods researchers employ to model brand choice are (1) conjoint analysis, (2) regression analysis, and (3) the combination of brand attribute performance ratings with attribute "importance" ratings, as sketched below. Findings from behavioral economics suggest that these kinds of models do not tell us the complete story of how consumers select brands; in fact, they may weave a false story. In short, consumers are not as "rational" as we think they are.
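To make the third method concrete, here is a minimal sketch of the kind of compensatory, importance-weighted model described above. The brands, attributes, weights, and ratings are all hypothetical illustrations, not real data or any particular vendor's procedure.

```python
# Minimal sketch of a compensatory brand-choice model: a brand's
# predicted appeal is the importance-weighted sum of its attribute
# performance ratings. All names and numbers are hypothetical.

attributes = ["price", "quality", "availability"]
importance = {"price": 0.5, "quality": 0.3, "availability": 0.2}  # weights sum to 1

# Attribute performance ratings (1-10 scale) for two hypothetical brands
ratings = {
    "Brand A": {"price": 7, "quality": 9, "availability": 6},
    "Brand B": {"price": 9, "quality": 6, "availability": 8},
}

def weighted_score(brand):
    """Importance-weighted sum of one brand's attribute ratings."""
    return sum(importance[a] * ratings[brand][a] for a in attributes)

for brand in ratings:
    print(brand, round(weighted_score(brand), 2))  # Brand A 7.4, Brand B 7.9
```

A model like this predicts that consumers simply choose whichever brand scores higher; the behavioral economics findings mentioned above suggest that real consumers often do not behave this way.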
What Kahneman and Tversky showed is that, more often than not, this is NOT the way our minds work: there's a gap between how our normative theories say we should reason and how we in fact reason.
This gap can manifest itself in different ways, and there's no one single explanation for it. One reason, for example, is that in real-world situations, the reasoning processes prescribed by our normative theories of rationality can be computationally very intensive. Our brains would need to process an awful lot of information to implement our best normative theories of reasoning. But that kind of information processing takes time, and in the real world we often need to make decisions much more quickly, sometimes in milliseconds. You can imagine this time pressure being even more intense if you think about the situations facing our Homo sapiens ancestors: if there's a big animal charging you and you wait too long to figure out what to do, you're dead.
This is an important point for marketers. Often, marketing managers need to make relatively quick decisions based on either too little or too much information. This frequently leads to using "rules of thumb" that we fall back on to save time. These shortcuts in making decisions are called "heuristics": practical, time-saving processes used to make quick decisions that are not necessarily optimal or perfect. Sometimes these heuristics employ one or more of the 60 logical fallacies discussed in this book, and when they do, you can be certain that the likelihood of a bad decision being made will increase. Note that a "heuristic" can also refer to ways "for thinking about phenomena or questions in a way that might give you new insights and ideas,"2 which can be used in argument development. Our Think Better piece on The Five Whys is an example.
Biases and Heuristics (Rules of Thumb)
So, the speculation is that our brains have evolved various shortcut mechanisms for making decisions, especially when the problems we're facing are complex, we have incomplete information, and there's risk involved. In these situations we sample the information available to us, we focus on just those bits that are most relevant to our decision task, and we make a decision based on a rule of thumb, or a shortcut, that does the job.
These rules of thumb are the "heuristics" in the "biases and heuristics" literature.
Two important things to note: One is that we're usually not consciously aware of the heuristics that we're using, or the information that we're focusing on. Most of this is going on below the surface.
The second thing to note is that these heuristics aren't designed to give us the best solutions to our decision problems, all things considered. What they're designed to do is give us solutions that are "good enough" for our immediate purposes.
But "good enough" might mean "good enough in our ancestral environments where these cognitive mechanisms evolved." In contexts that are more removed from those ancestral environments (say, in a marketing committee), we can end up making systematically bad choices or errors in reasoning, because we're automatically, subconsciously invoking the heuristic in a situation where that heuristic isn't necessarily the best rule to follow.
So, the term bias in this context refers to this systematic gap between how we're actually disposed to behave or reason and how we ought to behave or reason, by the standards of some normative theory of reasoning or decision making. The "heuristic" is the rule of thumb that we're using to make the decision or the judgment; the "bias" is the predictable effect of using that rule of thumb in situations where it doesn't give an optimal result.
Some examples of heuristics in marketing decision making we've observed are as follows:
"What's happened in the past will happen tomorrow": For example, making financial projections based simply on historical trends and not taking into account other factors that can affect future financial performance, such as projected GNP growth or the prime interest rate. (The sketch after this list shows how this shortcut can err in one direction.)
Argument to moderation: "Splitting the difference" between alternative marketing projections (e.g., a sales forecast) because one does not have sufficient time to develop a better forecast supported by good evidence and logic.
"Shooting from the hip": Simply guessing what action to take because one does not have enough time to properly investigate a particular issue.
Alleged certainty: Relying on conventional wisdom ("Everyone knows that ...") to make a decision as opposed to thinking through a problem in greater detail.
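To see how the first rule of thumb produces a bias rather than just occasional mistakes, here is a toy sketch; all sales figures are invented. A "forecast the recent average" heuristic applied to sales that are actually trending upward misses in the same direction every period.

```python
# Toy illustration of the "what's happened in the past will happen
# tomorrow" heuristic. The sales figures and their trend are invented.

sales = [100, 104, 108, 112, 116, 120]  # hypothetical sales, steady upward trend

def naive_forecast(history):
    """Rule of thumb: predict next period's sales as the average of the last 3."""
    return sum(history[-3:]) / 3

for t in range(3, len(sales)):
    forecast = naive_forecast(sales[:t])
    error = sales[t] - forecast
    print(f"period {t}: forecast {forecast:.1f}, actual {sales[t]}, error {error:+.1f}")
```

Every error has the same sign. In a stable environment the shortcut would be harmless; in a trending one it is systematically wrong in one direction, which is exactly what distinguishes a bias from ordinary noise.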
An Example: The Anchoring Effect
This is all pretty general, so let me give you an example of a cognitive bias and its related heuristic. This is known as the anchoring heuristic, or the anchoring effect.
Kahneman and Tversky did a famous experiment in the early 1970s where they asked a group of subjects to estimate the percentage of countries in Africa that are members of the United Nations. Of course, most aren't going to know this; for most of us it's just going to be a guess.
But for one group of subjects, they asked the question "Is this percentage more or less than 10 percent?" For another group of subjects, they asked the question "Is it more or less than 65 percent?"
The average answers of the two groups differed significantly. In the first group, the average answer was 25 percent. In the second group, the average answer was 45 percent. The second group estimated higher than the first group.
Why? Well, this is what seems to be going on. If subjects are exposed to a higher number, their estimates are "anchored" to that number. Give them a high number, they estimate higher; give them a lower number, they estimate lower.
So, the idea behind this anchoring heuristic is that when people are asked to estimate a probability or an uncertain number, rather than try to perform a complex calculation in their heads, they start with an implicitly suggested reference point, the anchor, and make adjustments from that reference point to reach their estimate. This is a shortcut; it's a rule of thumb.
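To make the anchor-and-adjust idea concrete, here is a toy simulation. The "true" value, adjustment rate, and noise level are invented parameters, not Kahneman and Tversky's actual procedure or data: each simulated subject starts at the anchor and adjusts only partway toward his or her own private guess.

```python
# Toy simulation of the anchor-and-adjust heuristic. All parameters
# (true value, adjustment rate, noise) are invented for illustration.
import random

random.seed(42)

TRUE_VALUE = 30   # hypothetical true percentage
ADJUSTMENT = 0.5  # subjects close only half the gap toward their own guess

def estimate(anchor):
    """One subject: start at the anchor, adjust partway toward a noisy private guess."""
    private_guess = random.gauss(TRUE_VALUE, 10)  # what they'd estimate unanchored
    return anchor + ADJUSTMENT * (private_guess - anchor)

for anchor in (10, 65):
    answers = [estimate(anchor) for _ in range(1000)]
    print(f"anchor {anchor}: mean estimate {sum(answers) / len(answers):.1f}")
```

Run, this prints a mean near 20 for the low anchor and near 47 for the high one: every simulated subject has the same underlying information, yet the group shown the higher number ends up with the higher average estimate, the same qualitative pattern as the 25 percent versus 45 percent result above.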
Now, you might think that in this case it's not just the number; it's the way the question is phrased that biases the estimates. The subjects are assuming that the researchers know the answer and that the reference number is therefore related in some way to the actual answer. But researchers have redone this experiment many times in different ways.
In one version, for example, the subjects are asked the same question, to estimate the percentage of African nations in the United Nations, but before they answer, the researcher spins a roulette wheel in front of the group, waits for it to land on a number so they can all see it, and then asks them whether the percentage of African nations is larger or smaller than the number on the wheel.
The results are the same: if the number is high, people estimate high; if the number is low, people estimate low. And in this case the subjects couldn't possibly assume that the number on the roulette wheel had any relation to the actual percentage of African nations in the United Nations. But their estimates were anchored to this number anyway.
Results like these have proven to be really important for understanding how human beings process information and make judgments on the basis of that information. The anchoring effect shows up in strategic negotiation behavior, in consumer shopping behavior, in the behavior of stock and real estate markets; it shows up everywhere. It's a very widespread and robust effect.
Note, also, that this behavior is, by the standards of our normative theories of correct reasoning, systematically irrational.
Linda Henman, a Missouri-based business consultant, provides an example of when the anchoring effect can have a negative outcome: If a team leader asks his subordinates whether marketing efforts in a particular region should be increased by, say, 20 percent, his employees will use that number as a cue. Perhaps the figure should be even higher, Henman notes, or the company needs to eliminate marketing efforts in that region altogether. By using the anchor of 20 percent, the boss has already planted a seed in his subordinates' minds, which will be difficult to erase.3
Why This Is Important
So this is an example of a cognitive bias. Now, this would be interesting but not deeply significant if the anchoring effect were the only cognitive bias we've discovered. But if you go to the Wikipedia entry under "list of cognitive biases," you'll find a page that lists just over a hundred of these biases, and the list is not exhaustive. I encourage everyone to check it out.
So what's the upshot of all this for us as critical thinkers?
At the very least, we all need to acquire a certain level of cognitive bias "literacy." We don't need to become experts, but we should all be able to recognize the most important and most discussed cognitive biases. We should all know what "confirmation bias" is, what "base rate bias" is, what the "gambler's fallacy" is, and so on. These are just as important as understanding ....