I Background
There are many histories that might be written concerning the study of the "hidden costs" of reward. Social philosophers from Locke to Dewey have concerned themselves with the manner in which rewards and punishments may be used most effectively to motivate performance and shape behavior. Likewise, much of modern-day experimental psychology had its roots in early laboratory investigations of the effects of rewards and punishments on learning and performance.
The two papers in this introductory section, however, focus in some detail on the two experimental traditions that have most directly shaped the thoughts and research of the contributors to this volume.
McCullers (Chapter 1) traces the history of the study of detrimental effects of rewards on measures of performance and learning from its roots in the study of motivational processes in animals, and he describes the theoretical models offered to deal with evidence of detrimental effects of reinforcement procedures observed in that context. These early theoretical models are then contrasted with an operant approach in which reinforcers are defined empirically and hence, by definition, have positive effects on behavior. This approach, McCullers maintains, has helped to blind us to the possibility that there may be "hidden costs" to the use of rewards.
Kruglanski (Chapter 2) examines a quite different tradition concerned with the social-psychological distinction between compliance and internalization, or between intrinsic and extrinsic motivation. The significance and interpretation of detrimental effects of rewards or other extrinsic constraints are traced from their roots in the experimental social psychology tradition begun by Lewin and his students to their current incarnation in the area of attribution theory and related theoretical formulations.
1 Issues in Learning and Motivation
John C. McCullers
Oklahoma State University
Given the title and general theme of this volume, it is clear that the reader is being asked to consider the perhaps surprising notion that reward can have adverse effects on intrinsic motivation and objective task performance. Some evidence and argument in support of that idea are presented in the following chapters. In this chapter, we raise the question of whether, and to what extent, the idea that reward can have detrimental effects on motivation and performance is in conflict with existing theory.
We shall begin with a discussion of theoretical principles that might account for the detrimental effects of reward. From available alternatives, we have selected three possibilities: the Yerkes-Dodson law, the Hull-Spence theory, and reinforcement contrast phenomena. All three reflect different perspectives and involve different explanatory mechanisms. These three theoretical viewpoints have been around in psychology for years and are rather widely known. If traditional theories of learning and motivation contain the necessary mechanisms to account for some of the adverse effects of rewards, as we believe they do, we are then left with another question: In what sense should the idea that rewards can have adverse effects be considered at all surprising? The remainder of the chapter, following the discussion of theoretical mechanisms, is devoted to this second question.
Do Classical Theories of Learning and Motivation Provide for a Detrimental Effect of Reward?
As we review our three classic theoretical positions and how they might account for reward's detrimental effects, it may be helpful to consider also reward's general relation to motivation and behavior for each theory. This may help us to identify the boundary conditions of any detrimental effects of reward and clarify the circumstances under which reward would be expected to have an enhancing effect.
The Yerkes-Dodson Law
One of the earliest expressions of the relationship between motivation and performance is contained in the curvilinear, inverted U-shaped function first observed by Yerkes and Dodson (1908), since known as the Yerkes-Dodson law. According to this "law," increasing the intensity or level of motivation will enhance performance up to a point; beyond that point, further increases in motivation will result in poorer performance. This relationship was found to hold for difficult tasks. With easy tasks, however, performance generally continues to improve with increasing motivation.
Most of the empirical support for the Yerkes-Dodson law has come from studies with animals where the experimental tasks, both easy and difficult, have been mainly discrimination-learning problems. Motivational level in these situations has been manipulated typically through variations in the amount of noxious stimulation (e.g., intensity of electric shock, seconds of air deprivation, and the like). The Yerkes-Dodson law tells us, for example, that rats should make fewer errors in learning a difficult discrimination under an intermediate level of aversive stimulation than under a low or high level.
The conceptual leap from rats to humans and from an induced aversive drive state to reward (particularly from drives that seem to threaten the organism's survival to the paltry sort of rewards that are typically dispensed in human research) may be more than many readers would care to make. Beyond that, there is the added problem that the research evidence, even with rats, does not lend itself to a clear-cut and unambiguous interpretation because of some methodological complications that need not concern us here.
If the Yerkes-Dodson principle had found no wider acceptance than in the animal-laboratory context in which it was formulated, we would hesitate to offer it here in connection with the present problem. That has not been the case, however. The notion that motivation should facilitate learning and performance only up to some optimal level (neither too low nor too high) has an intuitively logical and common-sense appeal about it. Perhaps for that reason, this concept has been employed widely in human social and developmental theories. Virtually all of the grand-scale developmental theorists, such as Freud, Piaget, Werner, and Lewin, have incorporated this principle into their theories. Further, these theorists have not seemed particularly troubled about making the leap from animals to humans or from drives to incentives. For example, Lewin (1946) tells us that "increasing incentives favor the solution of detour and other intellectual problems only up to a certain intensity level. Above this level, however, increasing the forces to the goal makes the necessary restructurization more difficult [p. 815]." Not only has this principle been used widely, but it continues to do service in current theoretical efforts. For example, one of our present contributors, Edward L. Deci, in his recent volume on intrinsic motivation (Deci, 1975), suggests that "intrinsic motivation increases as the goal difficulty increases, up to some optimal level [p. 117]."
Taken in its most general sense, the Yerkes-Dodson law suggests that there is an optimal level of motivation for any task or activity and that the more complex or difficult the task, the lower that optimal level. The critical assumption seems to be that an activity that can be performed efficiently at an optimal level of motivation will disintegrate under excessive motivation. Just why this should be the case is not clear, other than the rather vague implication that the answer lies somewhere in the organization and function of the nervous system. At a phenomenological level, the literature of animal psychology and human psychopathology provides many examples of the fact that ongoing normal activity can be dramatically disrupted under the stress of excessive motivation. The reader may wonder whether such disruption could be explained more economically in some other way, or whether rewards can in fact produce this type of disruption. Nevertheless, so long as rewards can be considered to provide a source of motivation, it is difficult to escape the conclusion that rewards should have an influence on motivational level relative to the optimum.
If incentives are considered to influence behavior independently of other sources of motivation, then the Yerkes-Dodson principle would predict that an intermediate level of incentive should enhance performance on a complex task but that a high level should interfere with performance. On the other hand, if incentives merely provide one source of motivation that combines with other sources, then the addition of even a low level of incentive in a complex task might be enough to put total motivation beyond the optimal level. Either way, it seems clear that the Yerkes-Dodson law could predict a detrimental effect of reward in some situations.
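To make this argument concrete with purely hypothetical numbers: suppose that the optimal level of motivation for a given complex task is 10 units and that a subject's other sources of motivation already stand at 8 units. An incentive contributing as little as 3 additional units would push total motivation to 11, past the optimum, and performance would be expected to suffer; the same incentive added to a baseline of only 4 units would leave total motivation below the optimum and should improve performance. The values are arbitrary, but they illustrate how even a modest reward could prove detrimental whenever other sources of motivation already place the organism near the optimal level for the task at hand.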
Hull-Spence Theory
Clark L. Hull and his colleagues, notably Kenneth W. Spence, formulated some of the most elegant and sophisticated statements in psychology on the relationship of reward to behavior. Hull-Spence theory has relied upon simple stimulus-response (S-R) mechanisms to explain behavior, with a careful distinction being maintained between learning and performance. Learning is seen as the elaboration and extension of innate, reflexive responses through the associative principles of classical conditioning. Performance, on the other hand, occurs as the joint result of learning and several nonassociative factors, chiefly motivation.
There are some important differences between Hull's statement of the theory in his Principles of Behavior (Hull, 1943) and his later revisions (Hull, 1951, 1952)—as well as between Hull and Spence (e.g., Spence, 1956)—concerning the theoretical conceptualization of the role of reward. Most of these differences are ignored for the present purpose. In later versions of the theory, reward performed several important functions. For example, reward may serve as reinforcement and thus influence learning directly through the development of habit strength (sHr). In the form of incentive motivation (K), the effects of rewards may combine with available habit strength to determine reaction potential (sEr) and thereby influence performance. Rewards play a part in the development of secondary motivation and secondary reinforcement. Also, reward as incentive motivation plays an important role in the formation of the fractional anticipatory goal response (rg), the S-R equivalent of the concept of expectancy that serves to guide instrumental behavior. For present purposes, we need consider only the relationship between E, H, and K.
The equation, E = H x K, indicates that reaction potential results in part from the multiplicative combination of habit and incentive motivation. Given that performance is determined by the strength of E, any increase in either H or K will increase the value of E and thereby increase the likelihood of occurrence of a particular response. Learning or habit formation involves the strengthening of the associative bond between "S" and "R" in the S-R relationship. Habit strength develops as a positive growth function of the number of reinforced trials. Within the theory, learning (H) cannot occur without reinforcement, and reinforcement provides the only systematic influence in the development of habit strength. Similarly, incentive motivation (K) increases as a negatively accelerated function of the amount of reward. Up to some asymptotic maximum value, the greater the amount of reward, the greater the value of K.
Given these considerations, it may appear to readers unfamiliar with this line of theorizing that reward's only effect upon performance should be to enhance it. That is not the case, however. Like the Yerkes-Dodson principle, the Hull-Spence theory predicts an enhancing effect of reward (K) on performance in simple tasks but a detrimental effect in complex tasks. The reason is that K multiplies indiscriminately with all available habits or response tendencies of the organism. In simple situations, the desired or correct response tendencies would be dominant or most likely to occur. Indeed, simple or easy tasks may be defined as those in which correct responses have a ready availability and high probability of occurrence. Given that the subject is likely to make correct responses anyway, any increase in K serves merely to increase the strength of E for correct responses (Ec), thereby enhancing performance. With complex tasks, on the other hand, the desired responses are initially less dominant than error tendencies. A predisposition to make more errors than correct responses is perhaps the defining characteristic of a complex or difficult task. In this situation, increasing K serves to increase the tendency to make errors (Ee) and thus lower the quality of performance. It is true that K also combines with the H for correct response tendencies in complex tasks and thereby increases the value of Ec. However, because of the multiplicative relationship between H and K, an increase in K functions to magnify the difference between Ee and Ec. This provides the basis for improved performance in simple tasks, where the desired responses are dominant, but for poorer performance in complex tasks, where the dominant responses are incorrect.
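A simple numerical illustration, with values chosen purely for convenience, may help to make the multiplicative argument concrete. Suppose that in a complex task the habit strength for the dominant error tendency is He = 6 and that for the correct tendency is Hc = 4. At a low level of incentive, say K = 1, the difference in reaction potential favoring errors is Ee - Ec = 6 - 4 = 2; if reward is increased so that K = 3, the difference becomes 18 - 12 = 6. Both reaction potentials grow, but the advantage of the error tendency has tripled, and errors become relatively more likely. In a simple task, where the hypothetical values would be reversed (Hc = 6, He = 4), the same increase in K triples the margin in favor of correct responding instead.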
The Hull-Spence theory thus makes the same empirical predictions as the Yerkes-Dodson law with respect to a detrimental effect of reward in complex tasks, ...