Cognition as Intuitive Statistics
About this book

Originally published in 1987, this title is about theory construction in psychology. Where theories come from, as opposed to how they become established, was almost a no-man's land in the history and philosophy of science at the time. The authors argue that in the science of mind, theories are particularly likely to come from tools, and they are especially concerned with the emergence of the metaphor of the mind as an intuitive statistician.

In the first chapter, the authors discuss the rise of the inference revolution, which institutionalized those statistical tools that later became theories of cognitive processes. In each of the four following chapters they treat one major topic of cognitive psychology and show to what degree statistical concepts transformed the understanding of that topic.

1 The Inference Revolution

From Tools to Theories: Scientists' Instruments as Metaphors of Mind

Metaphors

Metaphors have played their role in the development of all sciences. Charles Darwin, for example, took at least two human activities as metaphors for natural selection, namely, "artificial selection" and "war" (Gruber, 1977). Psychological thinking has been shaped by many a metaphor. Consider the case of memory.
Possibly the oldest metaphors to be found in psychology are those of Plato, who, in the Theaetetus, likened the impression of a "memory" on the "mind" to the impression of a seal or stylus on a wax tablet. In the same dialogue he also drew the analogy between the mind full of memories and an aviary full of flying birds: Trying to retrieve a memory is like trying to capture a bird in flight—one knows it is there, but it is not easily caught. Through the ages many other metaphors have been introduced for the understanding of memory. St. Augustine likened it to a storehouse, and the word "store" took firm root in the vocabulary of memory theory. More recent metaphors include analogies between memory and houses, gramophones, computer programs, libraries, tape recorders, holograms, and maps (Roediger, 1980).
What is a metaphor? First, it consists of a subject and a modifier (Beardsley, 1972). In the statement "man is a machine," "man" is the subject and "is a machine" is the modifier. Second, a metaphorical statement differs from a literal one ("man is a vertebrate") by virtue of a certain tension between subject and modifier. "The mind is a statistician" reflects such a tension. Third, in contrast to assertions that are merely odd, metaphorical assertions are intelligible and acceptable, even if somewhat deviant. Poetry, another domain in which deviant discourse matters, makes strong use of metaphorical language; there the effect may be beauty, whereas in science it may be new ways of thinking. Fourth, metaphors are not falsifiable. However, by narrowing the possible flow of connotations and associations with definitions and examples, a metaphor can be transformed into precise and testable statements. The result of this transformation is conventionally called a model rather than a metaphor, since it has lost its vagueness, is elaborated in a certain way (there may be other elaborations leading to other models), and offers predictions. We may think of a model as a controlled and elaborated metaphor.
What is the use of a metaphor? Its use is to be found in the construction of theories, rather than in the way they are tested. This means a metaphor may stimulate new ways of looking at the subject matter and create new interpretations of it. A metaphor cannot give us new ideas about how to test theories, but there is a connection between metaphors and theory testing that, as far as we can see, is unique to psychology. This connection is the subject of this book: Statistical tools for testing hypotheses have been considered in a new light as theories of cognitive processes in themselves. Examples are Neyman and Pearson's statistical theory, which has become a theory of object detection, known as signal detection theory (see chapter 2), and R. A. Fisher's analysis of variance, which has become a theory of how causal attributions are made (see chapter 5). Both have stimulated immense amounts of research during recent decades.

The Evolution of Metaphors

Metaphors common in psychology have changed over time partly as a result of the invention of new machines such as the telegraph and telephone, which ultimately led to the analogy between a person and a communication system (Attneave, 1959). Since the middle of this century, the "evolution" of metaphors has tended to focus on the tools that the behavioural scientist himself uses. Two major tools have been considered as important candidates for analogies with cognitive processes: computers and statistics.
The invention of the computer had, among other effects, the consequence of giving the scientist the opportunity to describe processes in terms of programs, carry out involved calculations, and manipulate lists of data (files). Each of these three aspects has been used as a metaphor of cognitive functioning.
For instance, one metaphor connected cognitive processing with the flow charts useful for depicting the steps in a computer program. Broadbent (1958) produced the first modern flowchart of the organism: He argued that information was received at the sensory receptors in parallel and was then put into short-term memory, where selective attention operated to give certain items a particular degree of "processing." The processed information could either result in an overt response, be put into long-term memory, or be recycled into short-term memory for rehearsal or further cogitation. Moreover, the planning and execution of an act have been compared with the execution of a computer program. Various cognitive processes were reinterpreted as "searching" through lists or files, represented by flow charts containing steps at each of which a decision has to be made. Newell and Simon (1972) have exhaustively examined the question of how far human problem solving can be imitated by devising computer programs that solve the same kinds of problems.
This book, however, is not concerned with the computer metaphor, but with the second major tool that became a metaphor of mind, namely, statistics.

Statistical Tools as Cognitive Theories

Between 1940 and 1955 statistical theories became indispensable tools for making inferences from data to hypotheses in psychology. The general thesis of this book is that tools that scientists consider indispensable and prestigious lend themselves to transformation into metaphors of mind. We call this, for short, the tools-to-theories hypothesis. In particular, we maintain that statistics and computers exemplify this hypothesis. We restrict the thesis to statistics and cognitive psychology only, and we are aware of the ambiguity inherent in the term "indispensable." However, in what follows we shall clarify the meaning of this term by showing how statistics became institutionalized in psychology.

Emergence of Statistical Inference

Statistical inference does not exhaust inference. From time immemorial not only scientists but persons from all walks of life have made inferences daily. Even after the introduction of statistical methods of inference many scientists—for example, physicists—have little or no recourse to them. In this section we discuss the inception of those major theories of statistical inference and hypothesis testing that have provided the armory for the inference revolution in psychology.
Neyman (1976) credits the mathematician and astronomer Pierre Laplace (1749–1827) with the first test of significance. In astronomy, the normal distribution was used as a model for errors of observation. The problem was, what to do with outlying observations, which the normal law makes highly improbable and which seem to be due to extraneous causes. Every experimenter knows this problem of outliers that seem to deviate too much from the others. Probabilistic criteria were developed for the rejection of outlying observations (Swijtink, 1987). When the probabilistic ideas of the astronomers were transferred by Adolphe Quetelet into the social sciences, an important shift in interpretation took place. Whereas the astronomers usually inferred from a discrepancy between model and data that the discordant observations had to be rejected, social scientists usually concluded instead that the model had to be rejected. We shall return to this shift later, in our discussion of Sir Ronald A. Fisher's statistical ideas.
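As a concrete illustration of such probabilistic rejection criteria (the source names no particular rule, so the rule and the measurements below are our own illustrative choices), the following sketch applies Chauvenet's classic criterion: an observation is rejected when the expected number of observations at least as extreme, under a normal law fitted to the sample, falls below one half.

```python
import numpy as np
from scipy import stats

def chauvenet_outliers(x: np.ndarray) -> np.ndarray:
    """Flag observations that Chauvenet's criterion would reject: those whose
    two-sided tail probability under the fitted normal law, multiplied by the
    sample size, falls below 0.5."""
    z = np.abs(x - x.mean()) / x.std(ddof=1)
    expected_as_extreme = len(x) * 2 * stats.norm.sf(z)
    return expected_as_extreme < 0.5

# Hypothetical measurements with one discordant value.
observations = np.array([10.02, 9.98, 10.01, 9.99, 10.00, 10.03, 11.50])
print(chauvenet_outliers(observations))  # only the last value is flagged
```

The astronomer's move is to discard the flagged observation; Quetelet's successors in the social sciences, faced with the same kind of discrepancy, would instead question the model itself.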
In fact, the first significance test seems to have been published by John Arbuthnot in 1710, about 100 years before Laplace's. The form of Arbuthnot's argument is strikingly similar to modern null hypothesis testing. However, since its content is so foreign to our 20th-century concerns, his memoir reveals the pitfalls of this form of statistical inference more clearly than more recent examples do.

The First Test of a Null Hypothesis

Arbuthnot held that the external accidents to which males are subject are far more dangerous than those which befall females. In order to repair the resulting loss of males, "provident Nature, by the Disposal of its wise Creator, brings forth more Males than Females and that in almost constant proportion" (p. 188). Arbuthnot favored this hypothesis of an intervening God over the hypothesis of mere chance—in modern terms, the null hypothesis. (He understood mere chance as implying equal chances for both sexes.) His data were 82 years of birth records in London (1629–1710), in every one of which the number of male births exceeded the number of female births. He calculated the expectation (the concept of probability was not yet fully developed at the time) of these data given the chance hypothesis, which is (1/2)^82. Because this expectation was astronomically small, he concluded "that it is Art, not Chance, that governs" (p. 189).
In a manner similar to that used in modern psychology, he thus rejected a null hypothesis in which he had never believed. Let us bypass the small errors in his argument (first, any particular sequence of data would have had the same expectation of (1/2)^82, even one with 41 female-predominant and 41 male-predominant years; second, a chance mechanism with a probability of 18/35 for male births would fit his data well), and turn immediately to his discussion:
From hence it follows, that Polygamy is contrary to the Laws of Nature and Justice, and to the Propagation of the Human Race; for where Males and Females are in equal number, if one man takes Twenty Wives, Nineteen Men must live in Celibacy, which is repugnant to the Design of Nature; nor is it probable that Twenty Women will be so well impregnated by one Man as by Twenty. (p. 189).
The inference in this first test of significance (of course, Arbuthnot did not use the term) was from the data to the existence of divine tinkering and in this sense constituted a "proof" of the existence of an active God. The parallel to the modern use (and abuse) is striking. A common practice today—one which, as we shall soon see, is statistically unsound—is to infer from the rejection of a specified null hypothesis the validity of an unspecified alternative hypothesis and to infer from this, as Arbuthnot did, the existence of a causal mechanism responsible for that deviation from "chance." In modern terms, Arbuthnot's chance hypothesis is a point hypothesis, that is, it specifies an exact value, whereas his Wise Creator hypothesis covers all other probability values except that single one. This is the first example of asymmetric hypothesis testing we know about, and it amply indicates the problems arising when alternative hypotheses are not specified.
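Arbuthnot's arithmetic, and the weakness of testing a point null against an unspecified alternative, can be made concrete in a short sketch. The yearly birth total below is a hypothetical figure chosen only for illustration; the source gives no such number.

```python
from scipy.stats import binom

# Chance hypothesis: each year is equally likely to show a male or a female
# excess, so 82 consecutive male-excess years have probability (1/2)^82.
p_null = 0.5 ** 82
print(f"P(82 male-excess years | chance) = {p_null:.2e}")  # about 2.1e-25

# A specific alternative like the one mentioned above: p(male) = 18/35 per
# birth. With, say, 14,000 births a year (illustrative figure), a male excess
# in any single year is then nearly certain, and so is a run of 82 such years.
p_male = 18 / 35
births_per_year = 14_000
p_excess = 1 - binom.cdf(births_per_year // 2, births_per_year, p_male)
print(f"P(male excess in one year | p = 18/35) ≈ {p_excess:.4f}")
print(f"P(82 such years in a row) ≈ {p_excess ** 82:.4f}")
```

Rejecting the chance hypothesis thus says nothing about which of the innumerable non-chance mechanisms, divine or otherwise, produced the data.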
The first modern significance test was the chi-square method developed by Karl Pearson (1900). One of the first questions to which Pearson applied his test was whether the inference that an empirical distribution is a normal distribution can be justified. As a result, Pearson's belief in the normal law as a law of nature decreased considerably. In this example, an inference from data to a hypothesis is attempted. However, as the earlier example from astronomers showed, there are other types of inference, such as inference from hypothesis to data.
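A modern rendering of the kind of question Pearson asked might look as follows; the sample is synthetic and purely illustrative, not Pearson's data. The observations are binned into classes of equal probability under a fitted normal law, and the observed counts are compared with the expected ones by his chi-square statistic.

```python
import numpy as np
from scipy import stats

# Synthetic sample, deliberately drawn from a non-normal distribution.
rng = np.random.default_rng(0)
data = rng.exponential(scale=1.0, size=500)

# Fit a normal law, then form ten bins that each carry probability 1/10
# under that fitted normal.
mu, sigma = data.mean(), data.std(ddof=1)
interior_edges = stats.norm.ppf(np.linspace(0.1, 0.9, 9), loc=mu, scale=sigma)
observed = np.bincount(np.searchsorted(interior_edges, data), minlength=10)
expected = np.full(10, len(data) / 10.0)

# Pearson's chi-square statistic; ddof=2 accounts for the two estimated
# parameters, leaving 10 - 1 - 2 = 7 degrees of freedom.
chi2_stat, p_value = stats.chisquare(observed, expected, ddof=2)
print(f"chi-square = {chi2_stat:.1f}, p = {p_value:.3g}")
```

A small p-value here is an inference from data to hypothesis: the hypothesis of normality is rejected, much as Pearson's own applications weakened his belief in the normal law as a law of nature.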
Let us turn to the origins of psychology's inferential statistics. We consider three widely divergent views about the nature of statistical inference: those of Bayes, Fisher, and Neyman and Pearson. As we show later, what is taught in psychology as "inferential statistics" is in fact none of these theories, but a hybrid of ideas stemming mostly from the latter two views, sometimes supplemented by a Bayesian interpretation. We shall describe these ideas only insofar as they have been incorporated into psychology and transformed into metaphors of mind. In contrast to contemporary textbooks, we shall emphasize the controversial nature of these statistical theories of inference, and the nonexistence of an agreed-upon solution for formal inference outside psychology. This contrasts starkly with the presentation of "inferential statistics" in psychology since the early 1940s as an uncontroversial and objective technique of inductive inference, one that can be used mechanically.
There are two poles between which ideas about the nature of inductive inference can be located. One considers inference from data to hypothesis as an informal cognitive process, based on informed judgment, and therefore strongly dependent on the content of the problem and one's specific experience with that content. According to this view, inferences are not independent of the content of the problem; therefore, it makes little sense to apply the same formal rule of inference to every problem mechanically. This nonformal view is maintained, for example, by physicists and other natural scientists, as opposed to most social scientists. The second view considers inference as a process that can be described by a single formal rule, which can be applied independently of the specific content investigated. Probability theory has been the single candidate for all such attempts to formalize inductive inference. Since probability theory was conceptualized only around 1660 (Hacking, 1975), the problem of formal inductive inference, or induction, is rather recent, and it seems to be the only major problem in philosophy that is of modern rather than ancient origin.

Bayes

It is not surprising that one of the first attempts to formalize inference came from Laplace. He proposed the following rule of succession (see Keynes, 1943, pp. 367–383):
p(H|n) = (n + 1)/(n + 2), (1.1)
where p(H|n) is the posterior probability of the hypothesis H that an event x will happen, given that the event has been observed n times in succession. For instance, if you were 20 years old, you would have observed approximately 7,300 times that the sun rises in the morning. According to the rule of succession, your probability for believing in a sunrise tomorrow should be roughly (7300 + 1)/(7300 + 2); that is, it approaches certainty. If something new happens to you and you have no information about the frequency of that event, you should infer that the event will occur again with a probability p(H|n = 1) = 2/3. Thus, if you play a lottery (without knowing the chances) for the first time and win the first prize, you should believe that if you play again tomorrow, you will win the first prize again with a probability of 2/3. Similarly, if you perform a new experiment and obtain result x, then you should expect to find x in a replication with the probability 2/3.
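A minimal sketch of equation (1.1), reproducing the figures used in this paragraph:

```python
def rule_of_succession(n: int) -> float:
    """Laplace's rule of succession (equation 1.1): posterior probability
    that an event recurs after it has been observed n times in succession."""
    return (n + 1) / (n + 2)

# Sunrise example: roughly 20 years of daily observations.
print(rule_of_succession(7300))  # 7301/7302, approaching certainty
# A single observation, e.g. one lottery win or one experimental result:
print(rule_of_succession(1))     # 2/3
```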
The rule has been criticized since Laplace's time as making too much out of almost nothing (Keynes, 1943). The criticism exposes a fundamental problem associated with such simple formal rules of inference: the neglect of context and content. (The sun may appear to go on rising or to cease rising, depending on whether you are in Paris or in Greenland. The probability that the experimental result will be found again may depend heavily on the specific content being investigated.)
Nevertheless, not only Laplace but such well-known statisticians as Karl Pearson believed in the rule of succession as a valid formula for inferring the future ...

Table of contents

  1. Cover
  2. Title
  3. Copyright
  4. Original Title
  5. Original Copyright
  6. Contents
  7. Acknowledgments
  8. Introduction: Two Revolutions—Cognitive and Probabilistic
  9. 1. The Inference Revolution
  10. 2. Detection and Discrimination: From Thresholds to Statistical Inference
  11. 3. Perception: From Unconscious Inference to Hypothesis Testing
  12. 4. Memory: From Association to Decision Making
  13. 5. Thinking: From Insight to Intuitive Statistics
  14. 6. Conclusions
  15. References
  16. Author Index
  17. Subject Index