Social Psychology and the Unconscious

The Automaticity of Higher Mental Processes

Edited by John A. Bargh

  1. 352 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

About this book

Evidence is mounting that we are not as in control of our judgments and behavior as we think we are. Unconscious or 'automatic' psychological and behavioral processes are those of which we tend to be unaware, that occur without our intention or consent, yet that influence us on a daily basis in profound ways. Automatic processes influence our likes and dislikes for almost everything, as well as how we perceive other people, such as when we make stereotypic assumptions about someone based on their race, gender, or social class. Even more strikingly, the latest research is showing that the aspects of life that are richest in experience and most important to us - such as emotions and our close relationships, as well as the pursuit of our important life tasks and goals - also have substantial unconscious components.

Social Psychology and the Unconscious: The Automaticity of Higher Mental Processes offers a state-of-the-art review of the evidence and theory supporting the existence and the significance of automatic processes in our daily lives, with chapters by the leading researchers in this field today, across a spectrum of psychological phenomena from emotions and motivations to social judgment and behavior.

The volume provides an introduction and overview of these now central topics to graduate students and researchers in social psychology and a range of allied disciplines with an interest in human behavior and the unconscious, such as cognitive psychology, philosophy of mind, political science, and business.


1

What is Automaticity? An Analysis of Its Component Features and Their Interrelations

AGNES MOORS and JAN DE HOUWER
The concept of automaticity is becoming increasingly important across nearly all subareas of psychology. Investigators who search for guidelines to assess the automatic nature of a certain task performance or process are faced with a multitude of views of automaticity. The variety of views is largely due to the topic studied (e.g., perception, memory, skill development, attention), the research paradigm employed (e.g., direct vs. indirect), the characteristics of the underlying information processing model (e.g., instance-based vs. rule-based), and the characteristics of the larger framework in which the information processing model fits (e.g., computational vs. connectionist). Many views define the concept of automaticity in terms of a number of features, but they differ with regard to the features they emphasize most, as well as with regard to the coherence they assume among those features. One contemporary account is the gradual and decompositional view, which proposes to investigate each automaticity feature separately and to determine the degree to which it is present. In this chapter, we engage in a detailed analysis of the most important features in order to examine whether they can indeed be regarded as gradual, and whether they can be conceptually and logically separated.
Before embarking on a discussion of some prominent views of automaticity, we must specify what the word automatic can be a predicate of and stipulate the research questions that prevail in the study of automaticity. The word automatic can be used to describe performances or effects, which are observable, or to describe the processes underlying the performance, which are not observable and hence need to be inferred. We take it as a general rule that when a performance is classified as automatic, the process underlying it can be classified as automatic as well. It may be good to keep in mind that processes can be described at different levels of analysis. Marr (1982), for example, distinguished between three levels of process understanding: the computational level, the algorithmic level, and the hardware level. The computational level articulates the functional relation between input and output, whereas the algorithmic level contains information about the formal properties of the processes involved in transforming input into output (i.e., what actually happens in the black box). For example, the higher-level functional process of stimulus evaluation (i.e., is a stimulus good or bad?) may be further specified at the algorithmic level as a process of direct memory retrieval (activation of a stored valence label in memory) or as a process of algorithm computation (comparison between a desired and an actual state). The hardware level is concerned with the physical implementation of processes in the brain. Other theorists (e.g., Anderson, 1987; Pylyshyn, 1984) have proposed a different number of levels and have placed the boundaries between the levels at somewhat different heights, but the important lesson is that processes described at higher levels can be explained by processes described at lower levels. Similarly, performances or effects can be explained by higher-level processes and, further down, by lower-level processes. It should be clear that we use the term explanation here in the sense of an explanation that specifies the underlying mechanism.
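To make this distinction concrete, consider a minimal Python sketch, assuming an invented toy stimulus set and arbitrary values, in which the same computational-level relation (classifying a stimulus as good or bad) is realized by two different algorithmic-level processes: direct memory retrieval of a stored valence label, and algorithm computation that compares a desired with an actual state.

    # One computational-level process (stimulus evaluation: good or bad?),
    # two hypothetical algorithmic-level implementations.

    # Hypothetical stored valence labels (declarative memory).
    STORED_VALENCE = {"puppy": "good", "spider": "bad"}

    def evaluate_by_retrieval(stimulus: str) -> str:
        """Direct memory retrieval: activate a stored valence label."""
        return STORED_VALENCE.get(stimulus, "unknown")

    def evaluate_by_computation(actual: float, desired: float) -> str:
        """Algorithm computation: compare an actual state with a desired state."""
        return "good" if actual >= desired else "bad"

    # The same input-output relation at the computational level,
    # realized by different processes at the algorithmic level.
    print(evaluate_by_retrieval("spider"))    # -> bad
    print(evaluate_by_computation(0.2, 0.8))  # -> bad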
Now that we have clarified our use of the terms performance, process, and explanation, we can distinguish between two types of research purposes that automaticity researchers have been concerned with. A first purpose is to diagnose the automatic nature of a task performance or a higher-level process. For example, skill-development researchers have tried to assess whether the performance on certain tasks has reached an automatic level. Affective priming researchers have tried to find out whether the affective priming effect (i.e., the fact that responses to a target are faster when preceded by a prime with the same valence; see Fazio, 2001, for a review) occurs automatically, and, by inference, whether the higher-level process of evaluating the primes can take place automatically. A second purpose is to explain automaticity in general. This purpose amounts to investigating which type of lower-level process can lead to automatic performance or automatic higher-level processes, and it can be rephrased as the purpose of diagnosing the automatic nature of these lower-level processes. Researchers may manipulate the lower-level process that participants use for a certain task and then assess which type of lower-level process leads to automatic performance. For example, affective priming studies may be designed to encourage the retrieval of a valence label from memory (e.g., Fazio, Sanbonmatsu, Powell, & Kardes, 1986) or, alternatively, the comparison between a desired and an actual state (e.g., Moors, De Houwer, & Eelen, 2004). It may then be assessed which of these lower-level processes produces automatic affective priming effects. Both research purposes can be rephrased as being about diagnosis: the first concerns the diagnosis of the automatic nature of a task performance or a higher-level process, and the second concerns the diagnosis of the automatic nature of a lower-level process. The diagnosis of a phenomenon is usually closely related to the way in which it is defined. We therefore start with an overview of different views (of the definition) of automaticity.
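As a rough illustration of how such an affective priming effect can be quantified, the following Python sketch, assuming hypothetical response times rather than data from the studies cited above, computes the difference in mean response time between valence-congruent and valence-incongruent trials; a positive difference indicates faster responses when prime and target share valence.

    from statistics import mean

    # Hypothetical trials: (prime valence, target valence, response time in ms).
    trials = [
        ("good", "good", 540),  # congruent
        ("bad", "bad", 552),    # congruent
        ("good", "bad", 603),   # incongruent
        ("bad", "good", 611),   # incongruent
    ]

    congruent = [rt for prime, target, rt in trials if prime == target]
    incongruent = [rt for prime, target, rt in trials if prime != target]

    # A positive value means responses were faster on valence-congruent trials,
    # i.e., an affective priming effect.
    priming_effect = mean(incongruent) - mean(congruent)
    print(f"Affective priming effect: {priming_effect:.1f} ms")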

VIEWS OF AUTOMATICITY

Most theories of automaticity are feature-based, defining automaticity in terms of one or more features. Different feature-based theories vary with regard to the features they select as the crucial ones, as well as with regard to the degree of coherence they assume among the features. Another proposal is to define and diagnose automaticity, not in terms of features, but in terms of the underlying process. This view is called the mechanism-based approach.

Feature-based Approach

Features have been clustered into two modes, into three (or more) modes, or they have been considered to be entirely independent. The first view to be discussed is the dual mode view. Although this view is now largely abandoned, several researchers seem to still implicitly rely on it, as is sometimes revealed in unguarded moments.
Dual Mode View

According to a dual mode view, there are two modes of processing that are each characterized by a fixed set of features. Automatic processes are characterized as unintentional, unconscious, uncontrollable, efficient, and fast. Nonautomatic processes are supposed to possess all the opposite features. The dual mode view is also an all-or-none view. Such a view combines the idea of a perfect correlation among the features of each mode with the idea that both modes are mutually exclusive and that they exhaust the universe of possible cognitive processes. In this way, any performance or process holds all of the features of one, and none of the features of the other mode. According to this all-or-none view, one can diagnose a performance or process as automatic (or nonautomatic) by assessing the presence of one feature belonging to the automatic (or the nonautomatic) mode. The presence of the remaining features of that mode can then be logically inferred (Bargh, 1992).
Two historical research traditions have been identified as responsible for the creation of the dual mode view (Bargh, 1996; Bargh & Chartrand, 2000; Wegner & Bargh, 1998). The first tradition developed from the single capacity view of attention (Shiffrin & Schneider, 1977), a view that originated from early research on skill development (Bryan & Harter, 1899) and dual tasks (Solomons & Stein, 1896; see review by Shiffrin, 1988). This tradition was also inspired by the writings of James (1890) and Jastrow (1906) on habit formation. The second research tradition grew out of the New Look program in perception research (e.g., Bruner, 1957).
Capacity view

The single capacity view of attention regarded attention as a limited amount of energy that can flexibly be allocated to different stages of processing (e.g., Kahneman, 1973). It was assumed that the early stages in the processing sequence (sensory analysis) require less attention (i.e., are pre-attentive) than the later stages. By virtue of extensive (consistent) practice, processes that are initially capacity-demanding can progressively come to operate without attention. Automatic processing was defined as processing without or with minimal attention (i.e., efficient), and automatization was defined as the gradual withdrawal of attentional involvement due to practice (Hasher & Zacks, 1979; Posner & Snyder, 1975a, b; Shiffrin & Schneider, 1977). We call this view the capacity view of automaticity.
Initially, proponents of the capacity view conceived of the criterion of attentional requirements as a continuum (see Hasher & Zacks, 1979), with automatic processes depleting only a minimal amount (efficient) and nonautomatic processes drawing on a substantial amount of attentional capacity (nonefficient). Other functional feature pairs (such as unintentional vs. intentional, unconscious vs. conscious, uncontrollable vs. controllable, fast vs. slow, parallel vs. serial) were derived from the feature pair efficient–nonefficient, and eventually this led to the view that automatic and nonautomatic processes represent two opposite modes of processing, each characterized by a fixed set of features. In this way, the initial conception of automaticity as a continuum developed into a dichotomous view.
New Look

The second research tradition that contributed to a dual mode view was the New Look movement in perception (Bruner, 1957). The original focus in this tradition was on the constructivist nature of perception, that is, the interaction between person variables (needs, expectancies, values, knowledge) and information available in the environment (Bartlett, 1932). Because of the hidden character of the influence of person variables on perception, the focus shifted toward unconscious perception (Erdelyi, 1992). The dual mode models that developed from this research tradition put most emphasis on the features unconscious and unintentional (e.g., Fodor, 1983).
To summarize, the first research tradition took the feature pair efficient–nonefficient as a starting point and added other feature pairs to this distinction. The second research tradition added other feature pairs to a dual mode model based on the feature pair conscious–unconscious. This different emphasis on individual features of automaticity stems for a large part from the type of research paradigms employed in both traditions. For example, Shiffrin and Schneider (1977) used search tasks, which are a special type of skill development task. In these tasks, participants are explicitly instructed to engage in the process under study (e.g., to detect a target). After extended (consistent) practice, the process becomes impervious to variations in task load, and this is taken as an indication that it has become efficient (Shiffrin & Schneider, 1977). Investigators from the New Look tradition used tasks in which participants were instructed to engage in a process that is different from the process under study or in which the process under study was concealed. For example, Bruner and Goodman (1947) asked participants to draw the physical size of coins and of equally sized discs, and they observed an overestimation of the size of coins compared to discs. This effect was larger for coins with a higher monetary value and for poor participants. The fact that participants processed the monetary value of the coins even when they were not instructed to do so lends support to the unintentional nature of this processing. In other studies, tachistoscopic presentations were used to establish thresholds for conscious recognition of desired and undesired words (e.g., Postman, Bruner, & McGinnies, 1948).
Despite these differences, researchers of both traditions have proposed very similar dual mode models for information processing in which perception, attention, and memory are intertwined. Both consider information processing as based on the activation of a sequence of nodes from long-term memory. Nodes can be activated in two distinct ways: by stimulus input alone, in which case activation is spread further to connected nodes with little attentional demand; or by a non-automatic process, through the allocation of attention. The dual mode model developed by these early researchers appears to be a tenacious one. Despite criticism and recent evolutions, it is still popular in various domains of research (e.g., emotion, social cognition). One of the reasons why the dual mode view seems so difficult to shake off is that it is strongly ingrained in the classic, computational metaphor of cognition on which most dual mode models rest. The computational metaphor of cognition can thus be considered as a third factor that is responsible for the creation and persistence of the dual mode view.
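The two activation routes can be sketched informally in code. The following Python fragment is a toy rendering, assuming an invented network of nodes and arbitrary activation values, of the distinction just described: activation that spreads from stimulus input alone with little attentional demand, and activation that is added by deliberately allocating attention to a node.

    from collections import defaultdict

    # Hypothetical long-term memory network: node -> connected nodes.
    NETWORK = {
        "coin": ["money", "value"],
        "money": ["wealth"],
    }

    def activate_from_stimulus(stimulus, spread=0.5):
        """Route 1: stimulus input activates a node, and activation spreads
        to connected nodes with little attentional demand."""
        activation = defaultdict(float)
        activation[stimulus] = 1.0
        for neighbour in NETWORK.get(stimulus, []):
            activation[neighbour] += spread
        return activation

    def allocate_attention(activation, node, boost=1.0):
        """Route 2: attention is deliberately allocated to a node, raising
        its activation independently of stimulus input."""
        activation[node] += boost
        return activation

    activation = activate_from_stimulus("coin")            # automatic spreading
    activation = allocate_attention(activation, "wealth")  # nonautomatic boost
    print(dict(activation))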
Computational framework

There are several elements in the computational framework that render it more susceptible to a dual mode view than the connectionist framework (see also Cleeremans, 1997). First, in computational models, knowledge is represented symbolically in long-term memory. This knowledge may consist of data (concepts, exemplars) stored in declarative memory, and programs (procedures, rules, algorithms) stored in procedural memory. Processing amounts to combining data and programs in working memory. The system takes data as its input, runs a program on it, and produces new data as its output. One commonly voiced concern is that symbolic representations, because they are abstract, need an external interpreter in order to inject them with meaning (i.e., the symbol grounding problem; Searle, 1992). Further, the conception of processing as symbol manipulation presupposes an external manipulator or processor. In most classic models, the interpreter (in charge of providing conscious meaning) and the manipulator (or controller) are united in one single entity, for example, a central executive (Baddeley, 1986). This central executive is also charged with directing the attention window. Hence, it is no surprise to find that the features conscious, controlled, and nonefficient are often mentioned in the same breath.
Not many classic model...

Table of contents

  1. Cover
  2. Half Title
  3. Full Title
  4. Copyright
  5. Contents
  6. About the Editor
  7. Contributors
  8. Introduction
  9. 1 What is Automaticity? An Analysis of Its Component Features and Their Interrelations
  10. 2 Effects of Priming and Perception on Social Behavior and Goal Pursuit
  11. 3 Automaticity in Close Relationships
  12. 4 On the Automaticity of Emotion
  13. 5 The Automaticity of Evaluation
  14. 6 The Implicit Association Test at Age 7: A Methodological and Conceptual Review
  15. 7 Automatic and Controlled Components of Social Cognition: A Process Dissociation Approach
  16. Author Index
  17. Subject Index