
- 502 pages
- English
- ePUB (mobile friendly)
About this book
This book grew out of a graduate course in cognitive organization and change that the author taught during his tenure at the University of Illinois at Chicago Circle. Two primary objectives of the course are reflected in this book: first, to provide a general conceptual framework for critically and systematically analyzing research and theory on attitude and opinion change; second, to stimulate research on the fundamental problems that this analysis makes salient. First published in 1974. Routledge is an imprint of Taylor & Francis, an informa company.
Cognitive Organization and Change by R. S. Wyer, Jr. is available in PDF and ePUB format, along with other titles in Psychology & History & Theory in Psychology.
1
MAN AS AN INFORMATION PROCESSOR: AN APPROACH TO THE INTERPRETATION OF BEHAVIOR
This book is about the manner in which beliefs and attitudes are developed and modified. More fundamentally, it is concerned with how information about persons, objects, or events is used in arriving at and reporting judgments of these stimuli. This information may be contained in written or oral communications about the stimulus to be judged. It may also be provided by more subtle aspects of the situation in which the judgment is made. For example, a person who administers an opinion questionnaire may convey the impression that he personally agrees with the statements contained in it. Or, a person who offers someone money to perform a task may give the impression that he personally considers the task to be important. Such impressions, correct or not, may affect the judgments reported in a given situation.
Another source of information is a person's behavior when in contact with the object to be judged. For example, a child who observes someone scream upon seeing a snake may infer that snakes are harmful. Studies of the acquisition of behavior and attitudes through observational learning (for a summary, see Bandura & Walters, 1963) demonstrate the potency of this information. In some circumstances, one's own behavior, like the behavior of others, may provide information about a stimulus. Bem (1967) has suggested that people typically do not make definite judgments of a stimulus until they are asked to do so; once they are asked, their own recent behavior toward the stimulus may provide cues as to what these judgments should be. Thus, if a person has picked up a snake and fondled it, and he is subsequently asked to judge the pleasantness of snakes, he may reason: "Persons who fondle snakes typically regard them as pleasant. I have fondled a snake; therefore, I must regard snakes as pleasant." Bem's general hypothesis can be applied in situations where the stimulus being judged is the judge himself. It suggests, for example, that a person's judgment that he is not very hungry may at times be the result of a decision to go without lunch rather than a determinant of this decision.
The "judgment" referred to above can also be of several types. One type of judgment is probabilistic; that is, one may estimate the likelihood that an object has a particular attribute, or that a certain statement about an object is true. Judgments may also be in terms of the amount of an attribute possessed by an object (for example, the intelligence of a college professor, or the favorableness of one's own feelings toward Communism). As we shall see, these two types of judgments are theoretically quite similar, and the processes underlying them may differ primarily in complexity. However, each has certain unique properties that make it useful in understanding particular judgmental phenomena.
I. MAN AS AN INFORMATION PROCESSOR
To convey the general philosophical approach to understanding cognitive processes we will take in this book, it may be helpful to draw an analogy between the human "information processor" and the computer, its electronic counterpart. Each processor is capable of receiving information, operating upon it according to certain rules, storing the results of these operations in memory, altering the contents of certain areas of memory to which new information is relevant, and ultimately reporting the results of these operations in a form that is implicitly or explicitly specified by a "user." The user of a human information processor may be the investigator in a psychological experiment, the teacher of a large lecture course, or simply a participant in an informal social interaction.
Several aspects of an information processor must be understood if it is to be used effectively and if its output is to be correctly interpreted:
1. The structure and organization of memory: the laws that govern the organization and storage of information in memory and the relations among the contents of different memory locations.
2. Information acquisition: the rules governing the reception of information, or "inputs"; limitations upon the amount of information that can be accommodated, and the rate at which it can be assimilated; successive vs. simultaneous input of information; time-sharing capabilities (that is, the capacity to receive and process two different and unrelated bodies of information simultaneously); priority decisions in times of overload.
3. Integration and processing of information: the logical and arithmetic operations used in generating solutions to problems of interest to the user; the processes of modifying the contents of memory locations in response to new information.
4. Language translation rules: the transformation of input information into "machine language," that is, into a form that can be processed by internal procedures available to the central processor; retranslation of problem solutions into a form that is understandable and acceptable to the user. (Since the form of the input information and the form of the output requested by the user may differ, the translation rules involved may also differ.)
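The four components just listed can be caricatured in a short modern sketch. This is purely illustrative and not from the book: every class, method, and value below is hypothetical, and a running average stands in, very crudely, for whatever integration rule a human processor actually uses.

```python
# Illustrative caricature of the four components: (1) memory,
# (2) acquisition, (3) integration, (4) translation/retranslation.
# All names and rules here are hypothetical, not from the book.

class ToyProcessor:
    def __init__(self, capacity=5):
        self.memory = {}          # (1) organized storage of judgments
        self.capacity = capacity  # (2) limit on how much input is held

    def acquire(self, inputs):
        # (2) Acquisition: accept inputs up to capacity and drop the
        # rest -- a crude "priority decision in times of overload".
        return inputs[:self.capacity]

    def translate(self, raw):
        # (4) Translation: recode raw input into an internal form
        # (a (topic, value) pair) the processor can operate on.
        topic, _, value = raw.partition(":")
        return topic.strip(), float(value)

    def integrate(self, topic, value):
        # (3) Integration: revise the stored judgment toward the new
        # evidence; a simple average stands in for a processing rule.
        old = self.memory.get(topic)
        self.memory[topic] = value if old is None else (old + value) / 2

    def report(self, topic):
        # (4) Retranslation: express the internal value in a form the
        # "user" requested, here a labeled verbal judgment.
        v = self.memory.get(topic)
        return f"{topic}: {'favorable' if v and v > 0 else 'unfavorable'}"

p = ToyProcessor()
for item in p.acquire(["snakes: -0.5", "snakes: -0.25"]):
    p.integrate(*p.translate(item))
print(p.report("snakes"))  # prints "snakes: unfavorable"
```

The point of the sketch is only that the four questions in the list are separable: one could swap the averaging rule in `integrate` for another without touching how inputs are admitted or how reports are phrased.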
The general analogy between electronic information processors and human processors seems appropriate. However, there is of course an important distinction. The rules and operations involved in the processing of information by a computer are all imposed by the designer of the machine. In the case of the human processor, these rules and operations are not known a priori. The objective of the psychologist is to discover the nature of these rules and operations and to use them to predict the output resulting from certain sets of input information.
In the preceding analogy, the role of motivation is well-hidden. One does not usually attribute "motivation" to a computer, at least as the term is commonly used (cf. Cofer & Appley, 1964). In one sense, however, computers are motivated; that is, they use information to attain certain specific objectives (generally, to solve particular problems of interest to the user). In this sense, the user gives motivation or incentive to the processor through the program he selects to analyze his data. In many instances, the objective of a human information processor (or subject, since he is the focus of attention throughout this book) is also specified by the user. In a psychological experiment, for example, the subject comes to the situation with the implicit expectation that his services are required to help the experimenter test a particular hypothesis, or to investigate a particular problem. While instructions are given concerning the general nature of the task to be performed, the specific problem to be solved (that is, the experimenter's hypothesis) is often not stated, or at least is incompletely described. The subject must therefore use whatever additional information is provided him to interpret the nature of the problem and to determine which "program" he should call from memory and use to generate an acceptable solution (in other words, what he should do in order to generate outputs that will adequately answer the question investigated by the experimenter). Then, he must report these solutions in a form that the user will regard as acceptable.
Readers will recognize this notion as a restatement of Orne's (1962) hypothesis that subjects are influenced by demand characteristics of the experimental situation. It draws attention to the fact that subjects in an experiment may simply be trying to behave as they think they are expected to behave by the person with whom they are interacting. Although this conclusion has little theoretical significance, it is not without interest. It emphasizes that an understanding of a subject's responses in an experimental situation requires a careful analysis of (a) the information provided the subject, directly or indirectly, about the experimental task and (b) the inferences the subject is likely to draw from information concerning the experimenter's objectives and expectancies.
Of course, the behavior ostensibly considered most acceptable by an experimenter may not be the behavior he would actually like to have occur. For example, suppose an experimenter indicates that a test he is administering measures intellectual ability. The experimenter may be interested in selecting low-ability subjects to participate in a later experiment and thus would prefer subjects to generate "incorrect" responses. However, in the absence of an explicit statement to the contrary, a subject is likely to infer that the experimenter's objective is to see how many "correct" answers he can produce and therefore will generate as many responses of this type as possible.
The preceding analysis is perhaps somewhat oversimplified, since it suggests that there is only one "user" of the human information processor at any given time. In fact, there may be several different users (for example, other participants in the experiment, or persons outside the experimental situation to whom the subject may communicate his activities and their results). If the outputs considered acceptable by these users differ, it could be difficult to predict a priori which user would determine the "program" used by the subject, and thus which outputs will be generated. Fortunately, in practice this problem may not be as serious as it seems. Milgram (1965a, 1965b) has obtained impressive evidence to suggest that when the expectancies ostensibly held for a subject by an experimenter conflict with those presumably held for him by others outside the experimental situation, the former most often determine the subject's behavior. In these studies, unpaid volunteer subjects were asked to "teach" a stooge to perform a task by administering a shock whenever the stooge made an error. The subjects used had no connection with the university at which the study took place and were in no way responsible to the experimenter. As the experiment progressed, the "learner" made frequent errors according to a prearranged schedule. After each error, the amount of shock administered was increased. (The stooge was placed in a room out of sight of the subject, and of course actually received no shocks at all.) Despite loud moaning and protesting by the stooge that he had a "heart condition," many subjects continued to administer shocks up to the maximum level (450 volts) and required only mild verbal pressure by the experimenter to do so. In fact, the average amount of shock administered by a sample of 40 subjects was 368 volts.
In contrast, when the experimental situation was described to control (nonparticipating) subjects, they generally regarded the experiment as "immoral" and predicted on the average that they would be willing to administer only 135 volts of shock. Similar predictions about subjects' behavior were made by a group of experienced clinical psychologists to whom the experiment was described (Milgram, 1965b). These data suggest that the tendency to comply with the wishes of the investigator in a psychological experiment is far stronger than researchers often assume. Since nonparticipants clearly disapproved of shocking the "learner," Milgram's findings also suggest that behavior in an experiment is more influenced by the immediate demands of the situation than by expectancies held for subjects by others outside this situation.
II. GENERAL IMPLICATIONS FOR RESEARCH
The approach outlined above has some quite specific implications for research on attitude and belief processes, as we shall attempt to demonstrate in the chapters to follow. More generally, the orientation we are proposing focuses upon (a) the type of information provided the subject in a particular social situation about the objectives he is expected to attain and the means of attaining them, and (b) the subject's capacity to receive and process information in the situation. This orientation may potentially help to clarify many social psychological phenomena studied in the laboratory. To support this assertion, some examples may be helpful.
A. Experimenter Effects upon Information Processing
The point we have made – that subjects often use the information provided in an experiment to infer the nature of the experimenter's expectations, and then respond in a way that conforms to these expectancies – may seem obvious. However, it is overlooked in many studies that are designed to investigate more central theoretical issues. Thus the results of these experiments are often difficult to interpret. To provide some examples of this, and also to show the distinction between an "information processing" interpretation of behavior and alternative theoretical interpretations, let us consider some representative experiments in the area of attitude and opinion change. In each case, we will first describe relevant aspects of the experiment and the theoretical issue it was designed to investigate, and will then reinterpret the experiment and the results obtained from an information processing point of view.
1. Sampson and Insko (1964). Sampson and Insko designed an interesting study to test certain implications of cognitive-consistency theory. According to this theory, inconsistencies among beliefs and attitudes are aversive and therefore tend to be eliminated. In the situation studied by Sampson and Insko, a state of consistency was defined as one in which a subject's (P's) belief about an object (X) was either similar to that of another (O) who was liked, or dissimilar to that of a disliked O. They tested the hypothesis that when P receives information that his cognitions about O and X are inconsistent, he will change his belief about X to regain consistency.
To test their hypothesis, the authors constructed an interaction situation in which a confederate (O) behaved toward P in a way that was intended to induce P either to like him or to dislike him. Under Liking conditions, O behaved toward P in a friendly and cooperative manner on a group achievement task, attributed favorable characteristics to him, and presented himself as similar to P in social and cultural background and in general attitudes and values. Under Disliking conditions, O behaved uncooperatively on the group task, blamed P for the failure to complete it successfully, attributed negative characteristics to P, and presented himself as dissimilar to P in background, attitudes, and values. After the manipulation of liking had been administered, P and O were asked to make judgments of the distance moved by a point of light (X) in a standard autokinetic situation. This task was described as a projective test of personality; subjects were told that persons who made similar estimates of X were similar in their underlying personality structure. Upon learning of P's initial estimates, O made judgments that were either very similar to P's or very dissimilar to P's. As predicted, when P found that his judgments of X were different from those of a liked O, he changed them to make them more similar to O's; on the other hand, when P found that his judgments were the same as those of a disliked O, he changed them to make them different from O's.
Although the results of this study support Sampson and Insko's theoretical position, an alternative interpretation is possible. Assume that P's primary objective in the study was to respond in a manner expected of him by the experimenter, and that he used the information provided him to determine the nature of these expectancies. Note that the manipulation of P's liking for O, although it was undoubtedly successful, also provided ostensibly objective information to P that O was either similar to him in background and values (under Liking conditions) or dissimilar to him (under Disliking conditions). P was also told that the autokinetic task was a measure of similarity in personality. If P inferred that the experimenter regarded the instrument as valid, he may have responded in a way that the experimenter would consider "correct," that is, in a way that ostensibly reflected the actual similarity between P and O, independently of P's personal liking for O. This would also account for the results obtained.
Here, and in the remaining studies we will discuss, it is important to bear in mind that the original interpretation of the results given by the authors may indeed be valid. The mere existence of an alternative explanation of a phenomenon obviously does not indicate that the original interpretation is incorrect. It is much easier to generate post hoc hypotheses for obtained results than to construct studies that successfully test the validity of a priori hypotheses. The studies being described here are presented primarily to demonstrate an information processing approach to understanding social phenomena and not to discredit other interpretations.
2. Walters, Marshall, and Shooter (1960). These authors also used an autokinetic task to test a hypothesis stemming from a social-learning formulation of behavior. The hypothesis concerned an established finding: when a subject has been deprived of social contact for a brief period of time, he learns relatively more quickly when social approval is given for a "correct" response. This finding had originally been interpreted as evidence that deprivation of social stimulation increases the value of this type of stimulation, much as going without eating increases the reinforcement value of food (cf. Gewirtz & Baer, 1958). However, Walters et al. argued that the increased effectiveness of social reinforcement under deprivation conditions is not due to the absence of social contact per se, but rather to an increase in general arousal that often accompanies social isolation. This arousal increases sensitivity to cues elicited by the reinforcing agent, and therefore previously conditioned responses are made to these cues. Walters et al. hypothesized that if social deprivation and arousal were manipulated independently, only the latter variable would affect responsiveness to a social reinforcer.
In the experiment, college subjects, run individually, were initially exposed to the autokinetic situation and were asked to estimate the distance moved by the point of light over a series of trials. Then, subjects run under high arousal conditions were administered a test that purportedly measured their intelligence and general ability. Subjects run under low arousal conditions were asked to indicate their aesthetic preference for each of a set of stimuli and were told that there were no right or wrong answers. It was assumed that subjects would be more concerned about their performance in the first condition than in the second, and that therefore their arousal would be greater. To manipulate social deprivation, half of the subjects under each arousal condition performed the task in the presence of the experimenter, while the remaining subjects performed the task with the experimenter out of the room.
After the above test had been completed, the autokinetic task was readministered. Before beginning, each subject was told that his initial judgments had generally been too low, and that on the present series of trials the experimenter would indicate whether he had responded within an acceptable range of the correct judgment by saying "right." The experimenter then began to "reinforce" responses that were equal to or greater than the subject's largest response on previous trials, and this continued until a criterion of three consecutive responses as large as that on earlier trials was reached. As predicted, subjects under high arousal conditions reached criterion in fewer trials, and made generally larger estimates, than did subjects under low arousal conditions, while the manipulation of social isolation had no significant effects.
To interpret the study within an information-processing framework, consider what information is provided subjects concerning the experimenter's objective. Subjects who are asked to take an intelligence test may infer that the experimenter is interested in measuring their ability to perform well, or to generate correct answers. They may assume that this objective also applies to the autokinetic task, during which the experimenter, in effect, gives them detailed information about how to generate "right" answers. In contrast, subjects who are initially administered the preference task are told explicitly that there are no right or wrong answers, and thus may infer that the experimenter is interested in their personal opinions, and not in the objective accuracy of their responses. If this inference is also assumed to generalize to the autokinetic situation, these subjects may be inclined to report their actual judgments of the distance moved by the light, regardless of information provided them concerning the "correct" distance. To this ...
Table of contents
- Cover
- Half Title
- Title Page
- Copyright Page
- Table of Contents
- Preface
- 1 MAN AS AN INFORMATION PROCESSOR: AN APPROACH TO THE INTERPRETATION OF BEHAVIOR
- 2 THE NATURE OF COGNITIONS – JUDGMENTS, BELIEFS AND ATTITUDES
- 3 EFFECTS OF RESPONSE LANGUAGE ON THE INTERPRETATION OF BELIEFS AND ATTITUDES
- SECTION II: COGNITIVE ORGANIZATION
- SECTION III: THE RECEPTION, ACCEPTANCE AND INTEGRATION OF INFORMATION
- REFERENCES
- AUTHOR INDEX
- SUBJECT INDEX