Cognition and Social Behavior
About this book

First published in 1976, this volume presents the collected papers of the Eleventh Annual Symposium on Cognition, held at Carnegie-Mellon University in April 1975. These papers are unique in the history of these symposia for their orientation toward the study of social behavior. The symposium brought together the two fields of social psychology and cognitive psychology in response to a growing desire among many social psychologists to seek out or develop a more systematic body of theory, and a corresponding desire among many cognitive psychologists to study the everyday affairs of people outside the laboratory.

By John S. Carroll and John W. Payne.

Part I
Developing a Cognitive Social Psychology

The chapters in this section present viewpoints of a theoretical or paradigmatic nature outlining possible approaches to a cognitive social psychology—the study of social behavior based solidly on knowledge of the processes of human thinking.
Dawes (Chapter 1) examines how judgments about people rest on our inflated belief in our cognitive abilities and the accompanying belief that errors are motivational in origin, and demonstrates this in diverse areas, including graduate student admissions and attitudes toward the Viet Nam War.
Carroll and Payne (Chapter 2) utilize the parole decision as a setting in which to apply recent theoretical formulations in social psychology and information-processing psychology.
Abelson (Chapter 3) presents a broad new theory of attitudes and behaviors based on the concept of "scripts," which are expected sequences of events of varying abstractness used to evaluate, predict, and generate specific instances.
Schmidt (Chapter 4) develops a formal model of the way people determine the causes of social events, the attribution process, through computer simulation.
Taylor (Chapter 5) contrasts the paradigms of social psychology and cognitive psychology and examines the problems and prospects of a cognitive social psychology.

1
Shallow Psychology

Robyn M. Dawes
University of Oregon
and
Oregon Research Institute

Philosophical Background

Although the overall philosophies of Plato (1942), Aristotle (1934), and Freud (1952) were different in many ways, they had at least one view in common: the human soul—or psyche—or mind—was viewed hierarchically; the bottom levels of this hierarchy corresponded to "lower" or vegetative or animal functions, whereas the highest level corresponded to the uniquely human reasoning capacity—or conscious part of the ego. For Plato, the levels were animal, spirited, and rational. For Aristotle, the levels were vegetative, animal, and rational. And for Freud they were the id (wholly unconscious), the unconscious parts of the superego and ego, and the conscious part of the superego and ego.
For all three philosophers, inter- and intrapersonal dysfunction was to be explained in terms of the "interference" of a lower level with the functioning of a higher level. For Plato, for example, cowardice in battle resulted from interference of the animal function of survival with the spirited function of conflict, whereas stupid decisions among philosopher-kings might result from spirited ambition interfering with rationality (hence they could not own property). For the Catholic Church (heavily influenced by Aristotle), dysfunction resulted from the basic "desires of the flesh" interfering with higher spirituality and reason (St. Augustine, 1953). Freud, whom some Christian theologians have regarded as something of an anti-Christ, has reinforced this viewpoint; the whole concept of "depth" psychology is that something comes from deep within the id or nearby—either an unconscious need or equally unconscious defense mechanisms against it—to mess up our ordinary pursuit of life, liberty, and happiness.
The recent "cognitive revolution" (Littman, 1969) has challenged the assumption that cognitive dysfunction is necessarily due to interference from noncognitive sources. People's cognitive capacities are limited (Fitts & Posner, 1967; Slovic, 1974); rationality is "bounded" (Simon, 1957); just as the psychoanalytic theorists have talked about nonoptimal functioning arising from "psychic economics," it is now legitimate to talk about such malfunctioning arising from "mental economics" (Abelson, 1974).
Perhaps the greatest contribution of "information theory" to psychology was not that it presented a precise quantitative measure of "information"—which never led to much (Coombs, Dawes, & Tversky, 1970)—but that it allowed psychologists to think in terms of quantity of thought in a behavioristic Zeitgeist in which such concepts as "mental effort" were derogated. Indeed, some of those who first discussed "information" as a technical concept now talk about "simple representation" or "Prägnanz" in broader terms (Attneave, 1959, 1974), whereas the early findings that people are "limited information processors" in a technical sense (Miller, 1956) have led to the general conclusion that we are "limited." These limitations do not, however, come from deep within the id—but from the "mind of man" (Slovic, 1974). In retrospect, it is somewhat surprising that we psychologists have not emphasized such limitations earlier—because so many of our disappointing students crump out or flunk out partly because they are unable or unwilling to put forth the mental effort to understand our not too difficult field. (For years, however, we have eschewed the idea of cognitive limitations by sending such people to psychotherapists or counselors, on the grounds that although their "heads" have needed examining, the real source of the problem has lain elsewhere. Cognitive incapacity was assumed to result from mental illness, which was emotional, not mental.)
I believe that it is not necessary for me to recount for this audience the many cognitive limitations that have been catalogued and investigated in the past several years. These range from an inability to integrate information (Einhorn, 1972; Hoffman, Slovic, & Rorer, 1968) to systematic biases in estimating probability (Tversky & Kahneman, 1974). In fact, there is some evidence that people cannot even keep two distinct "analyzable" dimensions in mind at the same time (Shepard, 1964), especially if they are asked to make judgments in which information about one of these dimensions may be missing (Slovic & MacPhillamy, 1974).
What I should like to do instead is to discuss two areas in which the belief that human cognition is sacrosanct and that dysfunction must be explained in noncognitive (i.e., motivational) terms may have led to an important misunderstanding and counterproductive, "irrational," behavior. I emphasize the term "may" because I am talking about my own observations about these areas and not about any systematic, hypothesis-based experiments.

The Assessment of Human Potential

One of the first areas to be investigated by clinical psychologists, as the profession grew rapidly after World War II, was the degree to which human judgment could be used in the prediction of such variables as patient responses to treatment, recidivism, or academic success (Sarbin, 1943). What could such judgment add to predictions that could be made on a purely statistical basis by, for example, developing linear regression equations?
In the early 1950s, Meehl (1954) reviewed approximately 20 studies in which actuarial methods were pitted against the judgments of the clinician; in all cases the actuarial method won the contest or the two methods tied. Since the publication of Meehl's book, there has been a plethora of additional studies directed toward the question of whether clinical judgment is inferior to statistical prediction (Sawyer, 1966) and some of these studies have been quite extensive (Goldberg, 1965). Meehl (1965) was able to conclude, however, some 10 years after his book was published, that there was only a single example in the literature showing clinical judgment to be superior, and this conclusion was immediately disputed by Goldberg (1968) on the grounds that even that example did not show such superiority. I know of no subsequent examples, following the customary rules of the game, that have purported to show the superiority of clinical judgment.
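The shape of these head-to-head comparisons can be caricatured in a few lines of code. The simulation below is purely illustrative—none of the predictors, weights, or noise levels come from Meehl's studies. The "actuarial" prediction is a fixed linear composite of two codable inputs; the "clinician" is modeled as having the same valid information, degraded by judgment-to-judgment inconsistency; validity is the correlation between prediction and obtained criterion.

```python
import math
import random

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

N = 2000
x1 = [random.gauss(0, 1) for _ in range(N)]  # e.g., a test score (hypothetical)
x2 = [random.gauss(0, 1) for _ in range(N)]  # e.g., past performance (hypothetical)
# Criterion: partly predictable from the inputs, partly irreducible noise.
y = [0.6 * a + 0.4 * b + random.gauss(0, 1) for a, b in zip(x1, x2)]

# Actuarial prediction: a fixed linear composite of the codable inputs.
actuarial = [0.6 * a + 0.4 * b for a, b in zip(x1, x2)]
# "Clinician": the same valid composite, plus unreliable judgment noise.
clinician = [p + random.gauss(0, 0.8) for p in actuarial]

print("actuarial r =", round(pearson(actuarial, y), 2))
print("clinical  r =", round(pearson(clinician, y), 2))
```

Even though the simulated clinician uses exactly the same valid information, the inconsistency alone is enough to make the mechanical composite the more valid predictor.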
The first of these rules is that the statistical prediction and the clinical judgment both be evaluated by correlations between predicted and obtained scores on some measurable criterion. Although the nature of such criteria has come under attack (Holt, 1970), there is no reason to believe clinical intuition to be superior at predicting some unmeasurable criterion (usually "long range"), or that a correlation coefficient is not a reasonable, although flawed, measure of predictive accuracy. The second rule is that both the clinical prediction and those of the statistical model be made on the basis of the same codable input. But while the clinician may have access to variables that cannot be coded without his or her presence—for example, feelings of liking or disliking a patient or potential graduate student—there is no reason to believe that such variables cannot be coded. For a fuller discussion of the findings and their limitations, see Dawes and Corrigan (1974, pp. 97 and 98).
What effect did all this research have on the actual practice of clinical psychology? Almost zilch. Clinicians continue to give Rorschachs and TATs, to interpret statistically unreliable differences on subtests of the WAIS with abandon, and to attempt clinical integration of the data. The belief that clinicians somehow can do better than a statistical model, can integrate the information from such diverse sources into a reasonable picture of their clients, persists despite lack of supporting evidence.
More recently it has turned out that optimal statistical models are not the only ones that outperform clinical intuition. In a business context Bowman (1963), and in psychological judgment contexts Goldberg (1970), Dawes (1971), and Wiggins and Kohen (1971), have suggested that models based on the clinicians' judgments could outperform the clinicians themselves. That is, if a "paramorphic" (Hoffman, 1960, 1968) model of an expert judge can be built, there is the "intriguing possibility" (Yntema & Torgerson, 1961) that this model may in fact outperform the judge on whom it has been based. Empirical research overwhelmingly supported this "bootstrapping" idea. It was thought that bootstrapping worked because the model abstracted the implicit weighting of the clinician while doing away with the unreliability of the particular judgments. But Dawes and Corrigan (1974; see also Dawes, 1975b) pointed out that the superiority of such "bootstrapped" models over the clinical judge involved only one of two possible comparisons—that of the judge with his or her own model. The other possible comparison is that of the judge with any reasonable statistical model. Dawes and Corrigan formed random linear models, in which the coefficients were chosen in the right direction but otherwise randomly. Such models also outperformed expert human judges in a wide variety of contexts. Once the predictor variables were scaled in such a way that higher values were related statistically to higher criterion values, the weights associated with these variables were not very important. Unit weights did even better than random weights, as follows from a simple mathematical inequality (Ghiselli, 1964; Dawes, 1970). As Dawes and Corrigan concluded (1974, p. 105), "The whole trick is to decide what variables to look at and then to know how to add."
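This "flat maximum" can be sketched directly. In the simulation below—whose three predictors, weights, and noise level are illustrative assumptions, not Dawes and Corrigan's data—the predictors are already oriented so that higher values go with higher criterion values, and unit weights and random positive weights both yield composites whose validities come close to that of the optimal weights:

```python
import math
import random

random.seed(2)

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

N = 2000
# Three hypothetical predictors, oriented so higher values go with a higher criterion.
preds = [[random.gauss(0, 1) for _ in range(N)] for _ in range(3)]
true_w = [0.5, 0.3, 0.2]
y = [sum(w * p[i] for w, p in zip(true_w, preds)) + random.gauss(0, 0.8)
     for i in range(N)]

def composite(weights):
    """Linear composite of the predictors under the given weights."""
    return [sum(w * p[i] for w, p in zip(weights, preds)) for i in range(N)]

r_optimal = pearson(composite(true_w), y)              # "proper" weights
r_unit = pearson(composite([1, 1, 1]), y)              # unit weights
r_random = pearson(                                    # right direction, random size
    composite([random.uniform(0.1, 1) for _ in range(3)]), y)

print("optimal:", round(r_optimal, 2),
      "unit:", round(r_unit, 2),
      "random:", round(r_random, 2))
```

The whole trick, in code as in the quotation, is deciding which variables enter the sum at all; once they do, and point the right way, the exact weights matter surprisingly little.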
This conclusion does not say a great deal for the human capability of intuitively integrating information from various sources to reach an accurate conclusion. But then the experimental work referenced earlier should have led us to expect that conclusion. Wilks, as far back as the late 1930s (Wilks, 1938), showed that various linear composites with weights in the same direction would correlate highly with each other. It follows that if linear composites with optimal—i.e., "proper"—weights outperform human intuition by a wide margin, then so will many linear models with nonoptimal—i.e., "improper"—weights, provided the weights are in the appropriate direction. Recent work by Wainer (1976) and Einhorn and Hogarth (1975) has supported this conclusion. Dawes (1974) has maintained that the expertise of good judges lies not in integrating information but in knowing how to code the important variables, a conclusion reached earlier in the areas of medical expertise (Einhorn, 1972, 1974) and chess expertise (deGroot, 1965; Simon & Chase, 1973).
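Wilks's observation is easy to verify numerically: two composites of the same predictors with different, same-direction weights are themselves highly correlated. A minimal illustrative simulation (independent standard-normal predictors; real predictors, which tend to be positively intercorrelated, push the composite correlation still higher):

```python
import math
import random

random.seed(3)

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

N, K = 2000, 4
preds = [[random.gauss(0, 1) for _ in range(N)] for _ in range(K)]

def composite(weights):
    return [sum(w * p[i] for w, p in zip(weights, preds)) for i in range(N)]

# Two different weight vectors, both pointing in the same (positive) direction.
wa = [random.uniform(0.2, 1) for _ in range(K)]
wb = [random.uniform(0.2, 1) for _ in range(K)]

r = pearson(composite(wa), composite(wb))
print("correlation between the two composites =", round(r, 2))
```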
What effects have these new findings had? I cannot be sure, but let me give you some anecdotes from the area in which I am very involved—that of admitting students to graduate school. Some four universities that I know of are now using linear composites, at least as an initial screening device. Many people concerned with graduate admissions express outrage over such a "dehumanizing" device. As one dean wrote, "the correlation of the linear composite with future faculty ratings is only .4, whereas that of the admissions committees' judgment correlates .2. Twice nothing is nothing." In response, I can only point out that 16% of the variance is better than 4% of the variance. To me, however, the fascinating part of this argument is the implicit assumption that that other 84% of the variance is predictable and that we can somehow predict it.
Now what are we dealing with? We are dealing with personality and intellectual characteristics of people who are about 20 years old, and what we are hoping to predict is some vague future criterion of professional success or self-actualization that could not be meaningfully assessed until at least 10 or 15 years later. Why are we so convinced that this prediction can be made at all? Surely, it is not necessary to read Ecclesiastes every night to understand the role of chance; nor is it necessary to reread Julius Caesar to understand that there is a tide in our affairs that must be taken at the crest or its momentum lost. Moreover, there are clearly positive feedback effects in professional development that exaggerate threshold phenomena. For example, once people are considered sufficiently "outstanding" that they are invited to outstanding institutions, they have outstanding colleagues with whom to interact—and excellence is exacerbated. This same problem occurs for those who do not quite reach such a threshold level. Not only do all these factors militate against successful long-range prediction, but studies of the success of such prediction are necessarily limited to those people accepted, with the incumbent problems of restriction of range and a negative covariance structure between predictors (Dawes, 1975a).
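The restriction-of-range point alone can be demonstrated in a few lines: when validity is computed only within the selected (top-scoring) group, the observed correlation drops well below its full-range value. The predictor, criterion, and 20% selection ratio below are all hypothetical:

```python
import math
import random

random.seed(4)

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

N = 5000
x = [random.gauss(0, 1) for _ in range(N)]           # composite admissions score
y = [0.6 * xi + random.gauss(0, 0.8) for xi in x]    # later success criterion

r_full = pearson(x, y)

# Admit only the top 20% on the predictor, then validate within the admits.
cutoff = sorted(x)[int(0.8 * N)]
sel = [(xi, yi) for xi, yi in zip(x, y) if xi >= cutoff]
r_selected = pearson([s[0] for s in sel], [s[1] for s in sel])

print("full-range r      =", round(r_full, 2))
print("within-admits r   =", round(r_selected, 2))
```

A validity study run only on admitted students thus understates how well the predictor works over the whole applicant pool.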
Consider now the variance that is predictable. What makes us think that we can do a better job of selection by spending 15 minutes looking at applicants' transcripts and reading their letters of recommendation, or by interviewing them for a half hour, than we can by adding together relevant (standardized) variables, such as undergraduate GPA, GRE score, and perhaps ratings of letters of recommendation? The most reasonable explanation to me lies in our overevaluation of our cognitive capacity. And it is really cognitive conceit. Consider, for example, what goes into a GPA. Because for most graduate applicants it is based on at least 3½ years of undergraduate study, it is a composite measure arising from a minimum of 28 courses and possibly, with the popularity of the quarter system, as many as 50. The evaluations in these courses are at least quasi-independent. For whereas some students from small colleges may bring their reputations with them, professors do not generally check on previous GPA before assigning a grade in a course (even though a good Bayesian may someday suggest such a procedure). Surely, not all these evaluations are systematically biased against independence and creativity. Yet you and I, looking at a folder or interviewing someone for a half hour, are supposed to be able to form a better impression than one based on 3½ years of the cumulative evaluations of 20-40 different professors. Moreover, as pointed out by Rorer (1972), what you and I are doing implies an ability to assess applicant characteristics that will predict future behavior differently from past behavior; otherwise, why not just use past behavior as the predictor? Those who decry the "dehumanization" of admitting people on the basis of past record are clearly implying such an ability. Finally, if we do wish to ignore GPA, it appears that the only reason for doing so is believing that the candidate is particularly brilliant even though his or her record may not show it.
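The argument about GPA is, at bottom, about aggregation: each course grade is one noisy reading of the same underlying ability, and averaging 30 of them cancels much of the noise, while a half-hour impression is a single, noisier reading. A sketch under purely illustrative assumptions about the noise levels:

```python
import math
import random

random.seed(5)

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

N, COURSES = 1000, 30
ability = [random.gauss(0, 1) for _ in range(N)]

# GPA: the average of 30 quasi-independent noisy grades per student.
gpa = [sum(a + random.gauss(0, 1) for _ in range(COURSES)) / COURSES
       for a in ability]
# Interview: a single reading with larger noise (a hypothetical noise level).
interview = [a + random.gauss(0, 1.5) for a in ability]

r_gpa = pearson(gpa, ability)
r_interview = pearson(interview, ability)
print("GPA       r =", round(r_gpa, 2))
print("interview r =", round(r_interview, 2))
```

The aggregate tracks the underlying ability far more closely than any single impression can, which is the Spearman-Brown logic behind trusting the cumulative record.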
What better evidence for such brilliance can we have than a score on a carefully devised aptitude test? Do we really think we are better equipped to assess such aptitude than is the Educational Testing Service, whatever its faults (Brill, 1974; Dawes & Hyman, 1971)?
I am not saying that there are no important variables that are often left unassessed by GPA and GRE. What I am saying is that we are unable to assess them on the basis of the data typically present in application folders, or on the basis of interviews. As Goldberg (1968) has pointed out, the answer to the problem is research designed to assess the relevant dimensions (which should be combined in a mechanical form). I am therefore entirely in sympathy with Dalrymple's recent plea that new criteria for admissions to medical school be used "so that those students can be admitted whom one can safely predict will be able to learn and use the intellectual tools that an excellent physician must possess..." (Dalrymple, 1974, p. 186). The question is, how can one "safely predict" such use? My answer is to orient our efforts toward isolating and evaluating the personality dimensions that involve such use—not to continue kidding ourselves that we can do so intuitively, in a burst of cognitive insight, or by abandoning reasonable criteria simply because they are flawed—e.g., ignoring GPA because some students with a 3.3 are better than are some with a 3.9. How are we to tell which?
Yet another argument I have encountered against the use of linear composites for admitting graduate students is that it is not fair to minority groups. Is vague judgment better? I agree with Darlington (1971) that the best way to make decisions about how to favor members of minority groups is to do so explicitly, by making group membership a factor in the linear composite. Such a fe...

Table of contents

  1. Cover
  2. Title
  3. Copyright
  4. Contents
  5. Preface
  6. PART I: Developing a Cognitive Social Psychology
  7. PART II: Cognitive Processes in the Perception of Self and Others
  8. PART III: Cognitive Processes and Social Decisions
  9. PART IV: The Unity of Cognitive and Social Psychology
  10. Subject Index