Hebb's Theory
Hebb's Theory, proposed by psychologist Donald Hebb, suggests that when two neurons are repeatedly activated together, the connection between them strengthens. This concept, known as "Hebbian learning," forms the basis for understanding how neural networks in the brain develop and adapt through experience. Essentially, Hebb's Theory emphasizes the role of synaptic plasticity in shaping learning and memory processes.
Written by Perlego with AI-assistance
10 Key excerpts on "Hebb's Theory"
Brain and Behaviour
Revisiting the Classic Studies
- Bryan Kolb, Ian Whishaw (Authors)
- 2016 (Publication Date)
- SAGE Publications Ltd (Publisher)
The Organization of Behavior to be the most important contribution to psychological theory in recent years." In a lengthy review, Leeper (1950) stated that "There are so many respects in which Hebb's book is so high in quality and is so delightfully written that it will have an assured status in psychology."

Figure 7.5 Hebb's illustration of the concepts of synaptic plasticity and the formation of a cell assembly. (a) A hypothetical group of neurons in Areas 17 and 18 of the visual cortex, illustrating Hebb's concept of how repeated stimuli in the part of the visual receptive field corresponding to the shaded area may lead to a recurrence of firing patterns between these neurons, with the consequence that the connections AC and CB are strengthened. (b) Illustration of another hypothetical situation in which A, B and C are neurons in visual Area 18 that are all strongly activated by a particular stimulus. The remaining neurons D, E and X form direct or indirect connections with the initial three neurons. Hebb proposed that the appropriate stimuli would result in the strengthening of numerous connections, including AE, BC and BD, eventually resulting in an increased probability of coordinated activity among the various neuronal pairs. (c) Identification of synaptically coupled neurons using monosynaptically restricted transsynaptic tracing with a deletion-mutant rabies virus. The mutant virus lacks a gene encoding a glycoprotein essential to the virus's ability to infect synaptically coupled neurons. Additionally, infection of a neuron with the mutant rabies virus requires expression in the target neuron of a specific receptor protein not normally present in mammalian neurons. In an initial step, a subpopulation of cortical neurons was transfected with genes encoding: first, the viral receptor protein, allowing infection with the modified rabies virus; second, the normal rabies virus glycoprotein that allows the virus to infect neurons presynaptically connected to the infected cell; and third, dsRed for identification. The mutant virus expresses green fluorescent protein (GFP), so neurons presynaptic to the initially infected neurons appear green. Initially infected neurons express both dsRed and GFP and appear yellow. (d and e) dsRed expression alone, and transsynaptically labelled neurons expressing GFP, around the neuron identified at the end of the dashed white line in (c). Scale bars 200 μm.
Artificial Neural Networks
Architectures and Applications
- Kenji Suzuki (Author)
- 2013 (Publication Date)
- IntechOpen (Publisher)
Also, the word "connectionism" appeared for the first time: "The theory is evidently a form of connectionism, one of the switchboard variety, though it does not deal in direct connections between afferent and efferent pathways: not an 'S-R' psychology, if R means a muscular response. The connections serve rather to establish autonomous central activities, which then are the basis of further learning" [21]. According to Hebb, knowledge is revealed by associations; that is, the plasticity of the central nervous system (CNS) allows synapses to be created and destroyed. Synaptic weights change their values and thereby allow learning, which can occur through internal self-organization: the encoding of new knowledge and the reinforcement of existing knowledge. How can a neural substrate for learning associations among facts about the world be supplied? Hebb proposed a hypothesis: connections between two nodes that are highly activated at the same time are reinforced. This kind of rule is a formalization of associationist psychology, in which associations accumulate among things that happen together. The hypothesis makes it possible to model the plasticity of the CNS, adapting it to environmental changes through the excitatory and inhibitory strengths of existing synapses and through its topology. In this way, a connectionist network can learn correlations among facts. In most cases, connectionist networks learn through changes in synaptic weights, which reveal statistical correlations in the environment; in a few models, learning may also happen through changes in network topology. This is a case of probabilistic reasoning without a statistical model of the problem. Basically, two learning methods are possible with Hebbian learning: unsupervised learning and supervised learning. In unsupervised learning there is no teacher, so the network tries to find regularities in the input patterns. In supervised learning, the input is associated with the output.
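The co-activation rule described in this excerpt, and the unsupervised/supervised distinction at its end, can be made concrete in a few lines of NumPy. The following is a minimal illustrative sketch, not code from the book; the network size, learning rate and target pattern are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.01                         # learning rate (illustrative value)
W = 0.1 * rng.random((3, 5))       # small random weights from 5 input nodes to 3 output nodes

def hebbian_step(W, x, y, eta=eta):
    """Plain Hebbian update: strengthen w_ij when input j and output i are co-active."""
    return W + eta * np.outer(y, x)

# Unsupervised Hebbian learning: the network's own response drives the update,
# so the weights come to reflect correlations in the input stream.
# (In practice a bound or normalisation is added to keep weights from growing without limit.)
for _ in range(100):
    x = rng.random(5)              # an input pattern
    y = W @ x                      # the network's own response (no teacher)
    W = hebbian_step(W, x, y)

# Supervised Hebbian learning: the desired output is clamped alongside the input.
x_example = rng.random(5)
t_example = np.array([1.0, 0.0, 1.0])   # teacher-supplied target pattern
W = hebbian_step(W, x_example, t_example)
print(W.shape)                     # (3, 5)
```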
Spiking Neuron Models
Single Neurons, Populations, Plasticity
- Wulfram Gerstner, Werner M. Kistler (Authors)
- 2002 (Publication Date)
- Cambridge University Press (Publisher)
In this chapter we consider the simplest set of rules, viz., synaptic changes that are driven by correlated activity of pre- and postsynaptic neurons. This class of learning rule can be motivated by Hebb's principle and is therefore often called "Hebbian learning".

10.1 Synaptic plasticity
Since the 1970s, a large body of experimental results on synaptic plasticity has been accumulated. Many of these experiments are inspired by Hebb's postulate (Hebb, 1949): "When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased," which describes how the connection from presynaptic neuron A to a postsynaptic neuron B should be modified. (Fig. 10.1: the change at synapse w_ij depends on the state of the presynaptic neuron j, the state of the postsynaptic neuron i and the present efficacy w_ij, but not on the state of other neurons k.) Today, 50 years later, this famous postulate is often rephrased in the sense that modifications in the synaptic transmission efficacy are driven by correlations in the firing activity of pre- and postsynaptic neurons, cf. Fig. 10.1. Even though the idea of learning through correlations dates back further (James, 1890), correlation-based learning is now generally called Hebbian learning. Hebb formulated his principle on purely theoretical grounds. He realized that such a mechanism would help to stabilize specific neuronal activity patterns in the brain. If neuronal activity patterns correspond to behavior, then the stabilization of specific patterns implies the learning of specific types of behaviors (Hebb, 1949).

10.1.1 Long-term potentiation
When Hebb stated his principle in 1949, it was a mere postulate.
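The locality illustrated in Fig. 10.1 (the change at w_ij depends only on the presynaptic neuron j, the postsynaptic neuron i and the current efficacy) can be written as a small update rule. The soft bound w_max, the learning rate and the random activity below are illustrative assumptions of this sketch, not taken from the book.

```python
import numpy as np

def hebbian_update(w_ij, pre_j, post_i, eta=0.005, w_max=1.0):
    """Correlation-driven change of a single synapse.

    Only locally available quantities enter the update: the presynaptic
    activity pre_j, the postsynaptic activity post_i and the current
    efficacy w_ij (here through an assumed soft bound at w_max). Other
    neurons k play no role, matching the locality sketched in Fig. 10.1.
    """
    return w_ij + eta * pre_j * post_i * (w_max - w_ij)

rng = np.random.default_rng(0)
w = 0.2
for _ in range(200):
    pre, post = rng.random(), rng.random()   # jointly active pre- and postsynaptic neurons
    w = hebbian_update(w, pre, post)
print(round(w, 3))                            # efficacy has crept toward w_max
```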
Concise Learning and Memory
The Editor's Selection
- 2010 (Publication Date)
- Academic Press (Publisher)
15 Neural Computation Theories of Learning
S. B. Moldakarimov, Salk Institute for Biological Studies, La Jolla, CA, USA; T. J. Sejnowski, Salk Institute for Biological Studies and University of California at San Diego, La Jolla, CA, USA. © 2008 Elsevier Ltd. All rights reserved.
Contents: 15.1 Introduction; 15.2 Hebbian Learning; 15.3 Unsupervised Hebbian Learning; 15.4 Supervised Learning; 15.5 Reinforcement Learning; 15.6 Spike-Timing Dependent Plasticity; 15.7 Plasticity of Intrinsic Excitability; 15.8 Homeostatic Plasticity; 15.9 Complexity of Learning; 15.10 Conclusions; References.

15.1 Introduction
The anatomical discoveries in the nineteenth century and the physiological studies in the twentieth century showed that brains were networks of neurons connected through synapses. This led to the theory that learning could be the consequence of changes in the strengths of the synapses. The best-known theory of learning based on synaptic plasticity is that proposed by Donald Hebb, who postulated that connection strengths between neurons are modified based on neural activities in the presynaptic and postsynaptic cells: "When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased" (Hebb, 1949). This postulate was experimentally confirmed in the hippocampus, where high-frequency stimulation of a presynaptic neuron caused long-term potentiation (LTP) in the synapses connecting it to the postsynaptic neuron (Bliss and Lomo, 1973). LTP takes place only if the postsynaptic cell is also active and sufficiently depolarized (Kelso et al., 1986). This is due to the N-methyl-D-aspartate (NMDA) type of glutamate receptor, which opens when glutamate is bound to the receptor and the postsynaptic cell is sufficiently depolarized at the same time (see Chapter 16).
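The NMDA-receptor mechanism described above amounts to a coincidence detector: potentiation requires presynaptic glutamate release and sufficient postsynaptic depolarization at the same time. The toy sketch below illustrates that gating; the threshold, step size and voltages are illustrative assumptions, not physiological values.

```python
def nmda_gated_update(w, pre_active, post_voltage,
                      depol_threshold=-50.0, eta=0.02):
    """Toy coincidence detector in the spirit of NMDA-dependent LTP.

    The synapse is strengthened only when the presynaptic cell is active
    (glutamate is released) AND the postsynaptic cell is depolarised past
    a threshold; neither condition alone changes the weight.
    """
    if pre_active and post_voltage > depol_threshold:
        return w + eta
    return w

w = 0.5
w = nmda_gated_update(w, pre_active=True,  post_voltage=-40.0)  # both conditions met: LTP
w = nmda_gated_update(w, pre_active=True,  post_voltage=-70.0)  # post too hyperpolarised: no change
w = nmda_gated_update(w, pre_active=False, post_voltage=-40.0)  # no presynaptic release: no change
print(w)  # 0.52
```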
Interpretation
Ways of Thinking about the Sciences and the Arts
- Peter Machamer, Gereon Wolters (Authors)
- 2014 (Publication Date)
- University of Pittsburgh Press (Publisher)
As the reader will appreciate, the standard philosophical accounts of the learning process—inductivism, hypothetico-deductivism, Bayesian probabilistic coherentism—all presuppose a background framework of meaningful concepts already in play, a framework within which their preferred forms of proposition-evaluation must take place. Where such conceptual frameworks come from to begin with—that is, how they originate—is a matter that is either ignored or badly fumbled by all of those accounts. (Think of the standard difficulties with a Locke/Hume-style concatenative empiricism, and with a Descartes/Fodor-style concept nativism.) The Hebbian story explored in the present essay—concerning how the raw statistics of a creature's sensory experience can make the creature differentially sensitive to certain prototypical patterns of neuronal activation—offers an account of how a structured family of prototypical categories can slowly emerge and form the background conceptual framework in which subsequent sensory inputs are preferentially interpreted. It offers an account of learning that needs no antecedent conceptual framework in order to get an opening grip on the world. On this account, our concepts gradually emerge as an integral part of our growing understanding of the objective, lawlike relations that unite them and divide them. They come into existence together—not the concepts before the laws, as the philosophical tradition has long assumed.

A Temporal Structure Grasped by Hebbian Learning: A Simple Case
The progressive Hebbian enhancement of any synaptic connection is a function, as we saw, of the temporal conjunction of an excitatory signal arriving at a specific synapse at the same time that the post-synaptic neuron is already in a state of raised excitation.
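The idea that the raw statistics of sensory experience can tune a unit to a prototypical pattern is often illustrated, in the connectionist literature, with Oja's normalised variant of the Hebbian rule, under which a single linear unit's weights drift toward the dominant direction of its input statistics. This is a sketch of that standard rule, not code or a rule from the essay; the data, learning rate and "prototype" direction are chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "sensory" inputs whose dominant (prototypical) direction is (1, 1)/sqrt(2).
base = rng.normal(size=(5000, 1)) @ np.array([[1.0, 1.0]])
data = base + 0.3 * rng.normal(size=(5000, 2))

w = 0.1 * rng.normal(size=2)       # initial synaptic weights of one linear unit
eta = 0.01                         # learning rate (illustrative)
for x in data:
    y = w @ x                      # the unit's response
    w += eta * y * (x - y * w)     # Oja's rule: Hebbian term y*x with built-in normalisation

w_hat = w / np.linalg.norm(w)
print(np.round(w_hat * np.sign(w_hat[0]), 2))   # approx. [0.71, 0.71]: tuned to the prototypical direction
```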
The Science of Learning and Development in Education
A Research-based Approach to Educational Practice
- Minkang Kim, Derek Sankey (Authors)
- 2022 (Publication Date)
- Cambridge University Press (Publisher)
The relationship between education and the science of learning is reinforced in Chapter 2 by focusing on the work of Donald Hebb (1949), a schoolteacher who became a distinguished brain scientist and laid down the foundations of synaptic plasticity that continues to underpin current research in the science of learning. Most of the learning occurring in schools today is Hebbian learning. Readers discover that those surrounding Hebb also made highly formative contributions to the science of learning, including Karl Lashley, Wilder Penfield (1958) and Hebb's doctoral student, Brenda Milner. Chapter 2 also provides an introduction to the basic anatomy of the brain, synaptic functioning, long-term memory and long-term potentiation (LTP).

[Concept map 1, "The science of learning and development", links the following themes: learning and development, brain science and complex systems; education research and classroom practice; research methods including EEG and fMRI; teacher education and the teacher's professional toolkit; the educational context of schools and classrooms; philosophical and critical analysis, critical realism; Hebbian learning, selectionist learning; brain plasticity, LTP, working memory and metacognition.]

Whereas Chapter 2 mainly refers to the science of learning, Chapter 3 focuses on the science of development and the major change that has occurred in developmental science over the past quarter of a century, with the introduction of dynamic systems theory (DST). This change has major implications for educational research and practice, not least because it challenges many of the theoretical assumptions that have guided education over the past 60 years or so. The origins of DST in mathematics, chemistry, physics and meteorology are briefly explored as a means of introducing non-specialist readers to the core concepts of DST, including the idea that children are complex, emergent, self-organising beings.
- Jean Delacour (Author)
- 1994 (Publication Date)
- World Scientific (Publisher)
There are several versions of this theoretical synapse; it is the cornerstone of most of the recent models (Brown et al. 1990; Delacour and Levy 1988), though its existence is still controversial (Kelso et al. 1986; Viana di Prisco 1984). Hebb's synapse can be defined as follows: the efficacy, or weight, of a synapse between neuron A (pre-synaptic) and neuron B (post-synaptic) increases if there is a positive correlation between the activities of A and B; for instance, when A discharges, B discharges at the same time or shortly thereafter. This condition is obviously satisfied if excitatory afferents to A and B have a high degree of synchronization. The hypothesis (Delacour 1988) according to which a local increase in synchronization, limited to small networks, may have a functional value recently received remarkable experimental confirmation. Visual stimuli evoke synchronous rhythmic activity, at about 40 Hz, in neurons of the visual cortex belonging to the same column or to different columns detecting the same features (Gray and Singer 1989; Gray et al. 1989). This synchronous oscillation would play a role in the representation of the visual world (Engel et al. 1991a,b,c; see also the model of Rujan in this volume). It may have an intracortical origin (Silva et al. 1991) or depend on thalamo-cortico-thalamic loops (Llinas et al. 1991; Steriade et al. 1991) in which that part of the thalamus we called T may have a special importance. The hippocampus may also be involved when its EEG activity is in the theta mode. As already stressed, the theta rhythm reflects a synchronized state of the hippocampus, and it is conceivable that this state is projected to the neocortex.
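Delacour's definition lets the postsynaptic discharge follow the presynaptic one "at the same time or shortly thereafter", i.e. an asymmetric temporal window. A minimal sketch of such a window follows; the 20 ms window, the step size and the spike times are illustrative assumptions, not values from the text.

```python
import numpy as np

def potentiate(pre_spikes, post_spikes, w=0.1, window=0.020, dw=0.01):
    """Strengthen the synapse whenever the postsynaptic cell fires at the same
    time as, or within `window` seconds after, a presynaptic discharge."""
    for t_post in post_spikes:
        deltas = t_post - np.asarray(pre_spikes)
        if np.any((deltas >= 0.0) & (deltas <= window)):
            w += dw
    return w

pre  = [0.100, 0.200, 0.300]              # presynaptic spike times (s)
post = [0.105, 0.215, 0.500]              # first two closely follow a pre spike, third does not
print(round(potentiate(pre, post), 3))    # 0.1 + 2 * 0.01 = 0.12
```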
Minds and Machines
Connectionism and Psychological Modeling
- Michael R. W. Dawson (Author)
- 2008 (Publication Date)
- Wiley-Blackwell (Publisher)
However, with this increased understanding of long-term potentiation, and with an emerging and detailed understanding of neural mechanisms, there has also been an increased need to propose more sophisticated models of synaptic change. Brown (1990) notes that there have been anywhere from 50 to 100 theories of this type, and proceeds to review only a subset of these. Brown classifies them as being Hebbian algorithms, generalized Hebbian algorithms, and global control algorithms. Again, one question to ask is how these more sophisticated rules might be incorporated into the models that have been described in the current chapter. Do these rules result in solving some problems that were not solved by the delta rule? If implemented, do these rules lead to behavioral results that are more or less consistent with the performance of human subjects in memory experiments? One theme that seems to be emerging in even this cursory glance at the current state of research related to distributed associative memories is that, while interesting, the versions of the networks that were described in this chapter are not as powerful as would seem to be required to keep up with advances in the field. What general approach could be used to increase the power of these networks? In the next two chapters, we will consider two very basic – but critical – modifications. In Chapter 10, we will look at some of the implications of changing the activation function from being linear (as is the case in Equation 9.2) to being nonlinear. In Chapter 11, we will discuss how the use of a nonlinear activation function permits even more power through the use of additional layers of processing units separating network input from network output.
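The question raised here about Hebbian rules versus the delta rule can be illustrated on a single linear unit. The sketch below is a comparison under arbitrary assumptions (the hidden mapping, learning rate and number of presentations), not the models or equations from the chapter: the error-correcting delta rule converges to the underlying mapping, while the plain Hebbian rule simply accumulates input–target correlations.

```python
import numpy as np

rng = np.random.default_rng(2)
true_w = np.array([0.5, -0.2, 0.8, 0.1])    # hidden linear mapping (illustrative)
eta = 0.05

x_train = rng.random((2000, 4))
t_train = x_train @ true_w                  # targets produced by the hidden mapping

w_hebb, w_delta = np.zeros(4), np.zeros(4)
for x, t in zip(x_train, t_train):
    w_hebb  += eta * t * x                  # Hebbian: correlate input with the desired output
    w_delta += eta * (t - w_delta @ x) * x  # delta rule: correct the remaining output error

print(np.round(w_delta, 2))   # close to true_w: error correction converges
print(np.round(w_hebb, 1))    # keeps growing; tracks correlations, not the mapping itself
```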
- William B. Levy, James A. Anderson, Stephen Lehmkuhle (Authors)
- 2014 (Publication Date)
- Psychology Press (Publisher)
Virtually all neuroscientists are convinced that precisely specified changes in synaptic coupling store memory. The chapters elsewhere in this volume give numerous examples of physiological evidence supporting this belief. In general, it is impossible to form new neurons after birth, at least in mammals. Large changes in dendritic branching and formation of new synapses are possible in immature organisms, but are unlikely in adults. Therefore changes in strength of preexisting synapses are the most likely candidate for learning in adults. My favorite candidate for the modifiable structure in adult mammalian neocortex is the dendritic spine. These structures seem to respond to experience to some degree and have electrical properties as well as anatomies that seem highly suitable for modification. In any case, detailed changes in the coupling between cells seem to be the mechanism. The most common departure point for further development seems to be the suggestion made by Donald Hebb in 1949. Hebb's suggestion was stated as follows: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic changes take place in one or both cells such that A's efficiency, as one of the cells firing B, is increased" [p. 62]. This suggestion predicts that cells will tend to become correlated in their discharges, and this kind of synapse is sometimes called a correlational synapse. Several of the chapters in this volume (Levy, Singer, Cooper) review the evidence for modification equations that are similar to those suggested by Hebb. Obviously, the details are of immense importance and interest. Clearly, also, we now have some physiological evidence that such systems exist in mammalian cortex. The approach taken here is to derive some very straightforward conclusions that arise if we simply assume that a very simple Hebbian scheme exists. We also briefly describe some simple simulations showing how robust these systems can be, and how resistant they are to damage and to parametric variations. Let us assume in general that synaptic change is related to the magnitude of both pre- and postsynaptic activity. Then we can immediately show that the system can act as a general purpose associator. As suggested by Hebb, correlational synaptic modification leads to associative structures. Because we know that human cognition is very strongly associative, this is a remarkably interesting result.
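The excerpt's point that "correlational synaptic modification leads to associative structures" is commonly demonstrated with a linear associator: pattern pairs are stored as a sum of Hebbian outer products and recalled by a single matrix–vector product. The sketch below is illustrative only; it assumes orthonormal key patterns so that recall is exact, which the excerpt itself does not require.

```python
import numpy as np

rng = np.random.default_rng(4)

keys   = np.eye(3)                      # three orthonormal "key" patterns (simplest possible choice)
values = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [0.5, 0.5]])         # arbitrary patterns to associate with the keys

# Hebbian (outer-product) storage: each pair adds its pre/post correlation to the weights.
W = np.zeros((2, 3))
for k, v in zip(keys, values):
    W += np.outer(v, k)

print(W @ keys[1])                                  # exact recall of the stored value: [0. 1.]
print(W @ (keys[1] + 0.05 * rng.normal(size=3)))    # a noisy key still recalls approximately: robustness
```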
Mechanisms of Cognitive Development
Behavioral and Neural Perspectives
- James L. McClelland, Robert Siegler (Authors)
- 2001 (Publication Date)
- Psychology Press (Publisher)
reinforcement learning algorithm (Barto, 1992) modulates the degree of Hebbian learning with outcome information. This algorithm has been shown to be quite effective in training networks to solve problems thought to require a more explicit error-correcting algorithm, and there is circuitry in the brain that may well allow the broadcast of reinforcement information to many of the synapses that may participate in perceptual learning. Thus, it is quite possible that Hebb's proposal was partially correct, but that it should be expanded to allow for modulation by outcome information, and possibly by other factors such as emotional state (McGaugh, 1989).

Conclusion
In this chapter, I have considered Hebb’s proposal for learning in the brain, and I have suggested that his proposal may provide a partial guide to understanding some of the circumstances under which experience fails to lead to improvements in performance. The proposal provides a common account of failures of learning in amnesics and in normal second language learners, and has been incorporated in a simple simulation model that captures some aspects of critical period effects in the acquisition of contrasts between phonemes not distinguished in one’s own native language. It also leads to testable predictions about what sorts of training regimes might lead to success or failure in teaching second language learners speech contrasts that are otherwise quite difficult for them to learn. That said, some additional observations that have arisen both from our modeling work and our empirical investigations suggest that Hebb’s proposal for learning may turn out only to be a partial guide to the conditions under which humans and other organisms can improve their performance as a result of experience. There are many reasons to suppose that a complete account of how learning occurs in the brain will go beyond Hebb’s initial proposal, and many reasons to suppose that additional insights into the successes and failures of learning at a behavioral or functional level will emerge as this more complete account is developed.
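The outcome-modulated Hebbian learning discussed in this excerpt is often written as a "three-factor" rule in which a broadcast reinforcement signal gates the local pre/post correlation term. The sketch below is an illustrative rendering of that general idea, not the Barto (1992) algorithm or the chapter's simulation model; the reward values, layer sizes and learning rate are assumptions.

```python
import numpy as np

def reward_modulated_hebb(W, pre, post, reward, eta=0.01):
    """Three-factor update: the local Hebbian term (post x pre) is gated by a
    globally broadcast outcome signal. Positive reward strengthens co-active
    connections, negative reward weakens them, zero reward leaves them unchanged."""
    return W + eta * reward * np.outer(post, pre)

rng = np.random.default_rng(3)
W = 0.1 * rng.random((2, 4))          # weights from 4 input units to 2 output units
pre = rng.random(4)
post = W @ pre                        # activity actually produced on this trial

W = reward_modulated_hebb(W, pre, post, reward=+1.0)   # outcome better than expected
W = reward_modulated_hebb(W, pre, post, reward=-0.5)   # outcome worse than expected
print(np.round(W, 3))
```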
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.









