
The Rest Principle
A Neurophysiological Theory of Behavior
About this book
First published in 1982. The human brain is the most complex object on Earth that can be studied scientifically: a collection of over 100 billion neurons squeezed into a space about the size of a grapefruit, which somehow is able to control all that you feel, do, and know. There still is little understanding of the most important and interesting functions of the brain, such as what really happens up there when you learn something, when you are thinking, or when you are feeling happy. In this book the author attempts to organize nearly the entire field of psychology within a single new theory, based upon only one very simple assumption about neuronal functioning.
1 Introduction
The human brain is the most complex object on Earth that can be studied scientifically: a collection of over 100 billion neurons squeezed into a space about the size of a grapefruit, which somehow is able to control all that you feel, do, and know. We understand rather well how the individual neurons function, how they carry information from one place to another, and how they stimulate or inhibit one another. We even have some idea of how they can analyze the incoming information and how they can organize the production of a response. In between the analyzed stimuli and the response automatons, however, is an area of profound mystery. Although prodigious amounts of data have been collected, there still is little understanding of the most important and interesting functions of the brain, such as what really happens up there when you learn something, when you are thinking, or when you are feeling happy.
This lack of understanding has been caused partly by an antitheoretical attitude in psychology in the last few decades (Royce, 1976). Prior to that, the development of general theories of learning had held an important position in the field. Perhaps because these efforts were not successful, succeeding generations of psychologists came to feel that little was to be gained by trying to make general theories. Instead, they collected information that was, at most, organized into models encompassing only small limited amounts of data. In the process they have made valuable contributions, and our knowledge about behavior has increased tremendously. Meanwhile, however, the fundamental questions have generally been avoided.
For instance, how does reinforcement really work? Perhaps I should define terms here. Positive reinforcement is anything that increases the probability of the preceding response's being emitted again in a similar situation. A hungry rat who receives food after pressing a lever is more likely to press the lever again; the food is, therefore, positive reinforcement.
The response was made in the first place because some collection of neurons fired. These neurons formed the connection between the stimuli that were present at that time and the particular response output. It is generally assumed that the response becomes more likely after reinforcement because the synapses in the intervening pathway were strengthened. But how? How can the presence of food in the mouth or in the stomach increase the connectedness of some set of neurons in the brain?
To make matters more complicated, it is now clear that under the right conditions practically anything can be a reinforcer. This finding is primarily what killed many of the older theories of learning. They had postulated that only things that reduced drives were reinforcers. This is certainly true in the case of the rat's learning to run in order to obtain food. The food reduced the hunger drive and therefore (although it was not stated how) managed to reinforce the running. It has since been shown, however, that rats will eat, even when they are not hungry, in order to have access to a wheel in which to run (Premack, 1962). Is there also a drive to run? Animals will also work in order to see things (Butler, 1953, 1958; Butler & Harlow, 1954), to reach a particular level of stimulation (Girdner, 1953; Harwitz, 1956), or to change the level or type of stimulation (Barnes & Kish, 1958; Kish, 1955; Roberts, Marx & Collier, 1958). Eventually the theorists had to postulate curiosity drives, drives for light, sound, and so on, and drives to change the amount of stimulation. The circularity of this argument should be obvious: Reinforcement was anything that reduced drives, and drives were anything that produced reinforcement when they were reduced. So the question of what is reinforcement was more or less put aside, whereas the question of how it really worked to strengthen connections was not usually considered.
Nevertheless, it had become clear that there were very many types of reinforcers. Moreover, each of these could apparently reinforce nearly all responses. In other words, the firing of practically any set of neurons could, under the right conditions, reinforce almost any other set.
The problem of nonspecificity becomes even more apparent in human learning. I just picked two words randomly out of the dictionary: worm and luster. Having read these, you already have developed some association between them. If you now free-associated to the word worm, the probability that you would say luster has increased. This dictionary has some 100,000 words, so there were about 5 billion pairs of words that could have been selected and that you could have associated. Whatever reinforced your learning of worm-luster, it must have been able to reach all 5 billion of these possible connections.
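(The arithmetic behind the 5 billion figure, assuming we count unordered pairs of two distinct words from a 100,000-word dictionary, is simply

$$\binom{100{,}000}{2} = \frac{100{,}000 \times 99{,}999}{2} \approx 5 \times 10^{9}$$

possible associations.)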
Previous learning theories generally can be divided into three categories on the basis of how they assume reinforcement strengthens the connections. The first category includes those theories that do not attempt to answer this question. They may specify what constitutes reinforcement (e.g., drive reduction [Hull, 1943], moderate increases in stimulus complexity [Dember & Earl, 1957], or arousal reduction [Berlyne, 1960]), but they do not specify a mechanism by which these factors manage to reach the pathway that has just fired and then strengthen the connections in it.
The second category contains those theories that assume that reinforcement occurs as a result of the activity of some specific mechanism external to the neurons that produced the response. For instance, Olds and Olds (1965) speculated that there was a network of neurons with reinforcing synapses located in the vicinity of all other synapses that could be strengthened. The activation of these reinforcing synapses somehow strengthens the other synapses nearby that had just fired. A similar idea appears in the hypotheses that particular transmitter substances such as norepinephrine or dopamine may cause reinforcement (Rolls, 1975, pp. 73–89). Both of these suggestions have difficulty accounting for the nonspecificity of learning. They would require an input from nearly all units in the brain into this reinforcing network and a direct output from it to all synapses that can be strengthened. No known system in the brain has such widespread direct ramifications. The norepinephrine and dopamine synapses, for instance, constitute only a small fraction of the total number of synapses.
The third category of theories includes those that speculate that connections become stronger because of being used. The more a connection is used, the easier it will be to traverse it in the future. This is what I call the use principle. It is found in a very wide range of theories (e.g., Guthrie, 1935; Hebb, 1949), but particularly in those proposed by researchers in human verbal learning and classical conditioning (e.g., Konorski, 1967; Pavlov, 1927). These are fields in which the nonspecificity of learning is most obvious and reinforcement is least obvious. As might be expected, theories employing the use principle can easily account for the nonspecificity of learning, but generally have difficulty explaining why positive reinforcement increases and negative reinforcement decreases the probability of the response being emitted again.
The use principle has become very deeply embedded in our thinking. It can also be seen as an implicit assumption in the thinking of some modern neurochemists and neurophysiologists, who often seem to believe that it is proved by behavioral results and that their task is to find the mechanisms causing it (Nathanson & Greengard, 1977). As we shall see in Chapter 6, this must be a most frustrating task for them.
The use principle seems intuitively obvious. If you want to learn a list of words, you say it over and over again, and eventually you know it. Although the introduction of the law of effect (that the consequences of an act, and not mere repetition of the act, determine whether it will be learned) seemed to contradict the use principle, various ways have been found to reconcile the use principle and the law of effect. At present, the use principle is probably accepted by more workers in the field than any other specific mechanism for strengthening neuronal connections.
This acceptance of the use principle has, I feel, been unfortunate, because it almost certainly is wrong.
As pointed out in the next few chapters, the use principle upon close examination produces some impossible conclusions. The physiological evidence also suggests that the use principle is not only wrong but also backward. In other words, synapses that are fired continually not only do not become stronger but actually become weaker.
The theory that I present here does not fall into any of the three categories mentioned. It does specify a process by which connections become stronger. This process does not depend upon any specific set of neurons external to those involved in the pathway to be reinforced but rather, like the use principle, is assumed to be a property of all neurons. Instead of the use principle, however, it is based on its antithesis, which I call the rest principle.
The rest principle states that connections within a pathway of neurons become stronger only if the neurons rest after firing and that the connections will get weaker if the neurons are fired repeatedly without rest.
The physiological evidence in favor of the rest principle is already quite strong and growing rapidly, as discussed in Chapter 6. Many of the phenomena that demonstrate the rest principle have been known for a long time and often have been treated as nuisances, perhaps partly because they did not fit in with a conceptualization based upon the use principle. Phenomena illustrating the increase in strength after rest include: (1) postinhibitory sensitization or rebound, in which neurons that have been made to rest by inhibition become easier to fire or spontaneously more active than normal when they are released from inhibition (Kuffler, 1953; Lake & Jordan, 1974); and (2) denervation supersensitivity, in which neurons (and muscles and glands) that have been allowed to rest by removal of input develop more receptors and become easier to fire (Cannon & Rosenblueth, 1949; Costentin et al., 1977; Creese et al., 1977; Feltz & De Champlain, 1972; Sporn, Harden, Wolfe, & Molinoff, 1976; Vetulani, Stawarz, & Sulser, 1976). Phenomena demonstrating the decrease in strength after continual firing include: (1) habituation (Cooper, 1971; Hinde, 1970; Peckham & Peckham, 1887; Thompson & Spencer, 1966); (2) pharmacological desensitization (Changeux, 1975; Curtis & Ryall, 1966; Katz & Thesleff, 1957; Magazanik & Vyskocil, 1973; York, 1970); and (3) denervation subsensitivity (which occurs when the degenerating neuron is flooding the recipient organ with very large amounts of transmitter; the supersensitivity begins developing only after the presynaptic neuron is apparently depleted of transmitter) (Deguchi & Axelrod, 1973; Emmelin, 1964a, 1964b; Reas & Trendelenburg, 1967; Trendelenburg, Maxwell, & Pluchino, 1970). Presynaptic negative feedback, in which the amount of transmitter released is decreased after large amounts have been released and increased if little or no transmitter has been released, appears to contribute both to the weakening of connections with continual use and to their strengthening after rest (Stjärne, 1975).
I remember that many years ago, when I took an introductory course in computer programming, there was one cardinal rule that could not be broken without dire consequences: "Don't program an endless circle into the computer!" If you did, the computer would get stuck until one of its disciples pulled the plug.
The use principle can easily lead to an endless circle, because it involves positive feedback (i.e., the more a system is used, the stronger it gets, thus increasing the probability that it will be used again). The rest principle, on the other hand, involves negative feedback (i.e., systems that are used too much become weaker and therefore are less likely to be used in the future). Consequently, as shown in the next chapter, animals whose nervous systems work according to the use principle are likely to get stuck on one response, but animals whose nervous systems work according to the rest principle do not usually get stuck. I think evolution would have imposed even stronger penalties for endless circles than those we faced in the computer course.
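To see the two feedback loops concretely, here is a minimal simulation sketch. It is not the author's Chapter 2 simulation; the two-response setup, the update rules, and the step sizes are illustrative assumptions only. Under the use principle every firing strengthens the pathway that fired; under the rest principle a pathway is strengthened only when its firing is followed by a rest, and it is weakened when it fires again without resting.

```python
import random

def simulate(principle, trials=200):
    """Toy contrast of the use principle and the rest principle.

    Two competing responses, A and B, start with equally strong
    connections; on each trial the response with the stronger
    connection is emitted (ties broken at random).  All numbers
    are arbitrary.
    """
    strength = {"A": 1.0, "B": 1.0}
    previous = None
    choices = []

    for _ in range(trials):
        top = max(strength.values())
        current = random.choice([r for r in strength if strength[r] == top])

        if principle == "use":
            # Use principle: every firing strengthens the pathway that
            # fired (positive feedback), so the first winner keeps
            # being chosen on every later trial.
            strength[current] += 0.05
        elif previous is not None:
            if previous == current:
                # Fired again without an intervening rest: weaken it.
                strength[current] -= 0.05
            else:
                # The pathway that fired on the previous trial has now
                # rested, so its connection is strengthened.
                strength[previous] += 0.05

        previous = current
        choices.append(current)

    return choices

for principle in ("use", "rest"):
    last = simulate(principle)[-50:]
    dominant = max(last.count("A"), last.count("B"))
    print(f"{principle} principle: most frequent response emitted {dominant} of the last 50 trials")
```

Under these toy rules the use-principle run locks onto a single response and emits it on every one of the last 50 trials (the endless circle), while the rest-principle run keeps switching between the two responses.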
This is only one of the advantages of the rest principle. Another is that it has a built-in process for weakening as well as strengthening connections; in contrast, the use principle allows only increases. This property has made it difficult for theories based on the use principle to deal with punishment and extinction, neither of which presents a problem for the rest principle. Some theorists (Hebb, 1949) have added a disuse principle to the use principle (i.e., connections that are not used become weaker). This addition has also produced problems and has not been generally accepted by use principle theorists.
In Chapter 5, it is shown how a nervous system employing the rest principle would automatically develop reciprocal lateral inhibition and self-inhibition, just as it seems our nervous systems do. The combination of the rest principle and these inhibitory connections that would develop from it are then shown to account for a wide variety of the findings presently known to psychology and neurology.
So far I have been emphasizing the differences between the present theory and previous ones. There are also, however, many similarities that should be pointed out.
My idea that reinforcement occurs because pathways that have just been active are allowed or forced to rest is somewhat similar to Guthrie's (1935) "trial terminator" hypothesis: Reinforcement is effective because it changes the stimulus input and thus lessens the chances of interference developing with the response that produced the reinforcement. The major difference is that I postulate an active mechanism for strengthening the last-used pathway during the pause.
Konorski's (1967) proposal for what constitutes reinforcement is also somewhat similar to mine. Like Guthrie, he employs the use principle, but with the modification that associations are formed only when the organism is in a state of arousal. Physiological drives are able to produce this arousal, and therefore all acts produced by a hungry animal, for instance, will be associated with hunger. In order to eliminate unsuccessful responses, Konorski postulates that "retroactive inhibition" rather than interference suppresses the movements that do not interrupt the drive and thus do not reduce the motor arousal. The successful response, however, reduces arousal, is not subjected to retroactive inhibition, and therefore remains strong. Food in the mouth rather than, for instance, an increase in the level of blood glucose is seen as the primary factor for reducing hunger drive.
The present theory, although based on the rest principle, also predicts that arousal is important, although not essential, for the strengthening of connections, as shown in Chapter 10. It is also similar in predicting that stimuli previously associated with the reduction of hunger, such as food in the mouth, are primarily responsible for reinforcing food procurement responses. In this way it resembles also the explanation Rolls (1975) gives for reinforcement from intracranial electrical stimulation. He states that reinforcement is caused by the firing of systems that constitute AND gates for the presence of the physiological need and of stimuli that previously have occurred just before reduction of these needs. Electrical stimulation of these AND gate systems is also reinforcing, and therefore animals will learn to work for such intracranial stimulation. The present theory is in complete agreement with this suggestion and also, I believe, is able to show in a self-consistent manner why it should be so.
The theory therefore is not opposed to most of Hullās drive reduction theory (1943). Indeed it provides a mechanism by which both drive reduction and secondary reinforcement could affect the previously used neuronal pathways and make them stronger.
There is an even closer relationship with those theories suggesting that reinforcement is caused by optimal levels of various factors: optimal levels of receptor stimulation (Leuba, 1955; Wundt, 1874); optimal amounts of stimulus departure from an adaptation level (McClelland, Atkinson, Clark & Lowell, 1953); optimal levels of "perceptualization" (McReynolds, 1956); optimal flow of information from the environment (Glanzer, 1958); optimal levels of stimulus complexity or novelty (Dember & Earl, 1957); and optimal levels of arousal (Berlyne, 1960; Hebb, 1955). As mentioned in the Foreword, the present theory really is an outgrowth of these optimal level theories. It is shown more specifically in Chapter 8 how such optimal levels for reinforcement are a direct corollary of the rest principle. Moreover, because the rest principle is assumed to apply to all neurons, there should be optimal levels of firing at the stimulus input level, at the level of analysis at which neurons are sensitive to stimulus change, and at still higher levels of analysis at which neurons are excited by specific features of the stimuli. It therefore encompasses all the previous optimal level hypotheses. The present theory, however, does not stop there. It also states that there should be optimal levels of firing for the neurons involved in thinking, decision making, response production, and motor control.
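One crude way to see why an optimum falls out of the rest principle (a back-of-the-envelope sketch, not the Chapter 8 argument itself): suppose that in any short interval a pathway fires with probability p, where p rises with the level of stimulation, and that a connection is strengthened only by the sequence fire-then-rest. The chance of that sequence is roughly

$$p(1 - p),$$

which vanishes both when the pathway is hardly ever driven (p near 0) and when it is driven continually (p near 1), and is largest at an intermediate level of stimulation.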
I disagree with Berlyne's (1960) conclusion that the reinforcement associated with these optimal levels is caused entirely by arousal reduction. The theory does predict, as mentioned before, that moderate increases in arousal can be reinforcing in some circumstances. In that sense the theory is in agreement with Berlyne's proposal that the reinforcing propert...
Table of contents
- Cover
- Title Page
- Copyright Page
- Table of Contents
- Preface
- 1. Introduction
- 2. Simulations of the Use Principle and Rest Principle with Neutral Stimuli
- 3. Learning to Eat
- 4. Classical Conditioning
- 5. Lateral Inhibition
- 6. Physiological Evidence Backing the Rest Principle
- 7. Instrumental Learning I: Drive and Stimulus Reduction
- 8. Instrumental Learning II: Stimulation Seeking, Optimal Levels, and Pleasure
- 9. Brain Structures and Neuronal Organizations
- 10. Sleep, Arousal, and Attention
- References
- Author Index
- Subject Index