The Psychology of Counterfactual Thinking
About this book

This book provides a critical overview of significant developments in research and theory on counterfactual thinking that have emerged in recent years and spotlights exciting new directions for future research in this area. Key issues considered include the relations between counterfactual and causal reasoning, the functional bases of counterfactual thinking, the role of counterfactual thinking in the experience of emotion and the importance of counterfactual thinking in the context of crime and justice.

Edited by David R. Mandel, Denis J. Hilton, and Patrizia Catellani.


Part I
Counterfactuals, causality, and mental representation

1 Counterfactual and causal explanation
From early theoretical views to new frontiers

David R. Mandel
In everyday and not-so-everyday life, we encounter situations that seem to demand an explanation of why something happened, how it happened, or how it could have been prevented. For example, following the 9/11 terrorist attacks, many people sought answers to each of these questions. Explanations of why it happened have focused on Islamic fundamentalism and US hegemony in world politics. Explanations of how it happened, by contrast, have focused on the actions of the terrorists and their accomplices who were involved in instigating or directly carrying out the attacks. Differently still, explanations of how the attacks might have been prevented have focused on errors of judgment and ineffectual policies of US government agencies such as the CIA and the FBI that bore responsibility for preventing such attacks. As the example illustrates, explanations are “tuned” by the type of question they are meant to address. They are meant to be relevant, not “merely” true or probable (Hilton and Erb 1996).
That causal thinking plays a key role in the explanation process may seem obvious. After all, how and why questions are causal questions. As Kelley (1973: 107) put it, “Attribution theory is a theory about how people make causal explanations, about how they answer questions beginning with ‘why?’” Less obvious, perhaps, is the role that counterfactual thinking may play in that process. Yet, over the past two decades psychologists have proposed that counterfactual thinking does indeed play a key role. In this chapter, I examine some of the theoretical claims, critiques, and reconciliation attempts that have emerged from this literature. Although I draw on evidence from various pertinent sources, my focus in this chapter is on reasoning directed at explaining an effect in a specific case. Readers interested in the role of counterfactual reasoning about causal laws might consult Tetlock and Belkin (1996a).

Early theoretical views

Early claims regarding the effect of counterfactual thinking on causal explanation have focused on two interrelated routes of influence: (1) the selection of contrast cases used to define the effect to be explained and (2) counterfactual conditional simulations used to test the plausibility of particular hypothesized causes (for reviews, see Spellman and Mandel 1999, 2003). I discuss these proposed routes in the subsections below.

Contrastive counterfactual thinking

The desire to explain is roughly proportional to the perceived discrepancy between expectancies and outcomes. When our expectancies are confirmed, there is little need for explanation. But, when expectancies are disconfirmed, they are likely to trigger spontaneous searches for causal explanations (Hastie 1984; Kanazawa 1992; Weiner 1985). The persistence of attention to disconfirmed expectancies is, by definition, then, a form of counterfactual thinking. Contrastive in nature, counterfactuals of this sort recapitulate expectancies that are juxtaposed against the reality of surprising outcomes. Although theorists tend to associate close counterfactuals with the term almost (Kahneman and Varey 1990; see also Chapter 8, Teigen) and counterfactual conditionals with the term if only and, more recently, even if, contrastive counterfactuals have not been provided with a natural-language marker. I propose that the term rather than might be appropriate in this regard. That is, contrastive counterfactuals often convey if not in form, then in gist, the following: “This unexpected event X occurred rather than Y, which I expected to occur instead.”
As many theorists have proposed, contrastive counterfactuals play an important role in causal selection by defining the nature of the effect to be explained (e.g., Einhorn and Hogarth 1986; Gorovitz 1965; Hesslow 1983; Hilton 1990; Mackie 1974). As Kahneman and Miller (1986) noted, in situations in which an outcome is viewed as normal in the circumstances – and hence expected – it is reasonable to answer the question “Why?” with the reply “Why not?” In such cases, no effect is meaningfully defined. The why question therefore presupposes a deviation between occurrence and what had been expected to occur by an explainee: “Why did this happen rather than what I expected would happen?” Theorists have proposed that, as a general rule, counterfactual thinking will recruit contrast cases that restore normality because people expect normal events to occur (Hart and Honoré 1985; Hilton and Slugoski 1986; Kahneman and Miller 1986). In the present context, normal not only means what was likely or is frequent, but also what is normative in the circumstances (McGill and Tenbrunsel 2000; Chapter 11, Catellani and Milesi).
These norm-restoring “downhill climbs” (Kahneman and Tversky 1982a) assist in defining a backgrounded set of factors – or causal field in Mackie’s (1974) terms – that are assumed to be common to both the factual case and the contrastive counterfactual set of cases and that are ruled out as causal candidates. Therefore, to the extent that contrastive counterfactual thinking plays a role in defining a causal field, it can be said to play an important role in determining what would not normally be deemed the cause. As we shall see next, counterfactual thinking has also been ascribed a more positive role in the process of causal explanation.

Counterfactual conditional simulations

While disconfirmed expectancies may be deemed counterfactual, not all counterfactuals merely recapitulate disconfirmed expectancies. An important function of counterfactual thinking, which Kahneman and Tversky (1982a) brought to psychologists’ research attention, is that it allows people to run “if–then” simulations in working memory, which can allow us to explore our own intuitions about how manipulations to aspects of a case might have influenced how the case would have subsequently unfolded.
Accordingly, the second proposed route of counterfactual influence on causal explanation involves the idea that counterfactual “if–then” or “even if–then” simulations can be used to identify – perhaps even verify – various causal contingencies that may later be deemed “the cause” (e.g., Lipe 1991; Mackie 1974; McGill and Klein 1993; Roese and Olson 1995a; Wells and Gavanski 1989). These conditional representations may be regarded as generic response forms to the counterfactual question “Would Y have happened if X had not?” For instance, according to this “counterfactual simulation account,” a student who learns that she failed an exam and who thinks “If I had studied harder, I would have passed the exam” will be more likely to view her lack of preparation as a cause (if not “the” cause) of her performance than a student in a comparable position who instead thinks “Even if I had studied harder, I still would have failed” (McCloy and Byrne 2002).
According to Mackie (1974), the counterfactual test in which the cause is negated is crucial because the very meaning of the expression “X caused Y” is, first, that X and Y in fact happened and, second, that had X not happened, Y also would not have happened. This idea has been recurrent in psychological literature linking counterfactual thinking and causal explanation. For instance, Kahneman and Tversky (1982a: 202) proposed that “to test whether event A caused event B, we may undo A in our mind, and observe whether B still occurs in the simulation.” More forcefully, Wells and Gavanski (1989: 161) proposed that “an event will be judged as causal of an outcome to the extent that mutations to that event would undo the outcome” (italics mine).
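Kahneman and Tversky’s “undo A, observe whether B still occurs” test can be made concrete with a small computational sketch. The toy causal model and function names below are my own illustrative inventions, not anything from the chapter; the sketch simply assumes a deterministic world in which the outcome is recomputed after one antecedent is negated:

```python
# Illustrative sketch of the "negate-X, verify-Y" counterfactual test.
# The model and all names here are hypothetical, for illustration only.

def simulate_fire(oxygen_present: bool, match_struck: bool) -> bool:
    """Toy deterministic model: a fire occurs iff oxygen is present
    AND a match is struck."""
    return oxygen_present and match_struck

def counterfactual_test(model, factual_inputs, candidate):
    """Negate one candidate cause and rerun the simulation.

    Returns True when undoing the candidate also undoes the outcome,
    i.e., when the candidate passes the necessity-style test."""
    factual_outcome = model(**factual_inputs)
    mutated = dict(factual_inputs)
    mutated[candidate] = not mutated[candidate]
    counterfactual_outcome = model(**mutated)
    return factual_outcome and not counterfactual_outcome

# In the factual case both conditions held and a fire occurred.
facts = {"oxygen_present": True, "match_struck": True}

# Both antecedents pass the test, since undoing either undoes the fire.
print(counterfactual_test(simulate_fire, facts, "match_struck"))    # True
print(counterfactual_test(simulate_fire, facts, "oxygen_present"))  # True
```

Note that the match and the oxygen pass the test equally well here, which previews the over-inclusiveness worry raised later in the chapter: by this criterion alone, oxygen is as much “the cause” of the fire as the struck match.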
Roese and Olson (1995a: 11) took the argument even further, claiming that although “not all conditionals are causal . . . counterfactuals, by virtue of the falsity of their antecedents, represent one class of conditional propositions that are always causal” (italics mine). These authors explain that “[t]he reason for this is that with its assertion of a false antecedent, the counterfactual sets up an inherent relation to a factual state of affairs” (1995a: 11). Mental model theorists (Byrne and Tasso 1999; Thompson and Byrne 2002) have similarly proposed that, whereas counterfactual conditionals automatically recover factual models, thus establishing a salient contrast case, factual conditionals do not automatically recover counterfactual models because people do not spontaneously represent false events. Thus, no contrast case would be evoked unless the implicit models were deliberatively unpacked. These later accounts suggest not only that counterfactual thinking may influence causal explanation, but that counterfactual thinking will have a stronger influence on causal explanation than factual thinking.

Empirical and theoretical challenges

Although some evidence supporting the idea that causal judgments (and related attributions) are influenced by counterfactual thinking has accrued (e.g., see Branscombe et al. 1996; Roese and Olson 1997; Wells and Gavanski 1989), most of it is based on studies that have manipulated the mutability of antecedents or outcomes in a given case (but see Chapter 10, Dhami et al.). Outcome mutability has been manipulated by constructing different versions of scenarios in which alternatives to a chosen option would have led either to the same outcome or to a better outcome (Wells and Gavanski 1989). Antecedent mutability has been manipulated, for instance, by varying the abnormality (Kahneman and Tversky 1982a) or controllability (Mandel and Lehman 1996) of antecedents in scenarios. The core assumption underlying such research has been that if the relevant manipulation influenced judgment, then it must have been mediated by counterfactual thinking.
Other research (e.g., Davis et al. 1995; N’gbala and Branscombe 1995), however, suggests that the effect of “mutability manipulations” on causal judgments may have had more to do with the particular hypothetical scenarios that had been used in previous research than with a robust effect of counterfactual thinking on causal judgment. For example, Mandel and Lehman (1996: Experiment 3) demonstrated that, when participants read about a hypothetical case that afforded the opportunity to make different counterfactual and causal selections, antecedent mutability manipulations influenced participants’ counterfactual listings, but these same manipulations did not influence participants’ causal judgments. Moreover, the antecedent that was perceived as most causal differed from that which was most frequently mutated as a way of undoing the outcome.
Trabasso and Bartolone (2003) pointed out that manipulations of antecedent normality in past studies are confounded with the extent to which such antecedents were themselves explained in the relevant scenarios. These authors independently manipulated level of explanation and normality in Kahneman and Tversky’s (1982a) “Mr Jones” car-accident scenario and asked participants to rank the likely availability of four counterfactual “if only” statements that mutated either the route Jones took, the time he left work, his decision to brake at the yellow light, or the other vehicle charging into the intersection. Calling into question the relation between normality and counterfactual availability, Trabasso and Bartolone found that counterfactual rankings were influenced by level of explanation only. Antecedent normality had no effect on participants’ judgments of how likely it would be that a given counterfactual statement would be generated.
Another set of studies (Mandel 2003b) examined the idea that counterfactual thinking about what could have been has a stronger effect on attribution than factual thinking about what was. For example, participants in Experiment 2 first recalled an interpersonal event they had recently experienced and were then instructed either to think counterfactually about something they (or someone else) might have done that would have altered the outcome or to think factually about something they (or someone else) did that contributed to how the outcome actually occurred. Participants rated their level of agreement with causality, preventability, controllability, and blame attributions, each of which implicated the actor specified in the thinking manipulation. Compared to participants who received no thinking directive, participants in the factual and counterfactual conditions reported more extreme attributions. However, mean agreement did not differ between the factual and counterfactual conditions – a finding that was replicated in two other experiments (cf. Mandel and Dhami in press; Tetlock and Lebow 2001).

Counterfactual tests of necessary causes

Another problem faced by counterfactual simulation accounts is that they imply that causal reasoners assign greater weight to causes that are necessary rather than sufficient to explain the relevant effect or yield the focal outcome. Strictly speaking, X is a necessary cause of Y, if X is implied by Y and, in contrapositive form, if ¬X implies ¬Y (the symbol ¬ is read as “the negation of”).1 By contrast, X is a sufficient cause of Y, if X implies Y and, in contrapositive form, if ¬X is implied by ¬Y (Cummins 1995; Fairley et al. 1999). However, as Mackie (1974) explained, the concept of a causal field requires a qualified interpretation of these relations, such that necessity and sufficiency are interpreted as meaning necessary or sufficient in the circumstances, where the latter qualification may be interpreted as meaning “given the presence of all the events in the case that are backgrounded.”
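The two definitions and their contrapositive forms can be checked mechanically. The following truth-functional sketch is my own illustration (the helper name is invented, not from the chapter); it verifies that necessity (Y implies X) is equivalent to ¬X implies ¬Y, and sufficiency (X implies Y) to ¬Y implies ¬X, for every assignment:

```python
# Truth-table check of the necessity and sufficiency definitions and
# their contrapositives. Helper name is hypothetical, for illustration.
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: p -> q is false only when p and not q."""
    return (not p) or q

for x, y in product([True, False], repeat=2):
    # X necessary for Y:   Y -> X   ==   not-X -> not-Y
    assert implies(y, x) == implies(not x, not y)
    # X sufficient for Y:  X -> Y   ==   not-Y -> not-X
    assert implies(x, y) == implies(not y, not x)

print("contrapositive equivalences hold for all assignments")
```

The equivalences are exact, which is why the counterfactual “¬X, therefore ¬Y?” probe is an assessment of necessity in the circumstances, and not of sufficiency.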
On either interpretation, counterfactual conditionals that proceed by negating a hypothesized cause provide information relevant to the assessment of whether that factor was necessary to bring about the effect. I regard this as one of the principal limitations of counterfactual simulation accounts because there is mounting evidence that what is meant by the term cause in everyday discourse tends to reflect sufficiency in the circumstances rather than necessity in the circumstances. Indeed, even Mackie (1974: 38), a key proponent of the “necessary cause” view, wrote:
There is, however, something surprising in our suggestion that “X caused Y” means, even mainly, that X was necessary in the circumstances for Y. Would it not be at least as plausible to suggest that it means that X was sufficient in the circumstances for Y? . . . After all, it is tempting to paraphrase “X caused Y” with “X necessitated Y” or “X ensured Y,” and this would chime in with some of the thought behind the phrase “necessary connection”. But if “X necessitated Y” is taken literally, it says that Y was made necessary by X or became necessary in view of X, and this would mean that X was sufficient rather than necessary for Y.
One of the key problems with treating “X caused Y” as meaning, primarily, that “had X been absent, Y would not have occurred” is that the definition is too inclusive (Lombard 1990). As Hilton et al. (Chapter 3 below) put it, “Th[e] plethora of necessary conditions brings in its train the problem of causal selection, as normally we only mention one or two factors in a conversationally given explanation . . .” Contrary to this reasonable proposal, according to the “negate-X, verify-Y” counterfactual criterion, oxygen would be the cause of all fires, and birth would be the cause of all deaths. Clearly, statements such as these violate our basic understanding of what causality means. Although we can all agree that birth is necessary for death, few would say the latter is brought about by the former. It is the quality of being instrumental in “bringing about,” even if not without the assistance of a background set of enabling conditions, which suggests that “X caused Y” means X was sufficient in the circumstances for Y to occur.
Since Mackie posed the preceding question, psychologists have conducted considerable research to empirically address it. Studies of naïve causal understanding indicate that people define causality primarily in terms of sufficiency. For example, Mandel and Lehman (1998: Experiment 1) asked participants to provide open-ended definitions of the words cause and preventor. They found that a majority of participants defined cause (71 percent) and preventor (76 percent) in terms of sufficiency (e.g., “if the cause is present, the effect will occur”). By contrast, only a minority defined these concepts in terms of necessity (22 percent and 10 percent for cause and preventor, respectively; e.g., “if the cause is absent, the effect won’t occur”). Although Mandel and Lehman (1998) did not report the cross-tabulated frequencies of response, a re-analysis of the dataset revealed an interesting result. Without exception, the minority of participants who provided a necessity definition also provided a sufficiency definition for the same term. That is, not a single participant in their study provided a definition of causality or preventability only in terms of necessity, whereas the majority provided definitions that focused exclusively on sufficiency.
Given the possibility for bias in coding of open-ended responses, Mandel (2003c: Experiment 2) attempted to replicate these findings by asking participants whether they thought the expression “X causes Y” means “When X happens, Y also will happen” (i.e., X is sufficient to cause Y) or “When X doesn’t happen, Y also won’t happen” (i.e., X is necessary to cause Y). Eighty-one percent of the sample interpreted the causal phrase in terms of sufficiency and a comparably high percentage (84.5 percent) thought that other people would do so too.
Goldvarg and Johnson-Laird (2001) provide converging support for the sufficiency view. Their research examined the types of mental models that people view as being consistent with expressions like “...

Table of contents

  1. Cover Page
  2. The Psychology of Counterfactual Thinking
  3. Routledge research international series in social psychology
  4. Title Page
  5. Copyright Page
  6. Figures
  7. Tables
  8. Contributors
  9. Introduction
  10. Part I Counterfactuals, causality, and mental representation
  11. Part II Functional bases of counterfactual thinking
  12. Part III Counterfactual thinking and emotion
  13. Part IV Counterfactual thinking in the context of crime, justice, and political history
  14. References