Music and the Aging Brain

Edited by Lola Cuddy, Sylvie Belleville, and Aline Moussard
About This Book

Music and the Aging Brain describes brain functioning in aging and addresses the power of music to protect the brain from loss of function and to help cope with the ravages of the brain diseases that accompany aging. By studying the power of music in aging through the lenses of neuroscience and of behavioral and clinical science, the book explains brain organization and function. Written for those researching the brain and aging, the book provides solid examples of research fundamentals, including rigorous standards for sample selection, control groups, description of intervention activities, measures of health outcomes, statistical methods, and logically stated conclusions.

  • Summarizes brain structures supporting music perception and cognition
  • Examines and explains music as neuroprotective in normal aging
  • Addresses the association of hearing loss with dementia
  • Promotes a neurological approach for research in music as therapy
  • Proposes questions for future research in music and aging


Information

Year: 2020
ISBN: 9780128174234
Chapter 1

The musical brain

Stefan Koelsch¹ and Geir Olve Skeie²,³

¹Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway; ²The Grieg Academy – Department of Music, University of Bergen, Bergen, Norway; ³Department of Neurology, Haukeland University Hospital, Bergen, Norway

Abstract

During listening, acoustic features of sounds are extracted in the auditory system (the auditory brainstem, thalamus, and auditory cortex). To establish auditory percepts of melodies and rhythms (i.e., to establish auditory “Gestalten” and auditory objects), sound information is buffered and processed in auditory sensory memory. Musical structure is then processed based on acoustic similarities and rhythmic organization, and according to (implicit) knowledge about the musical regularities underlying scales, melodic and harmonic progressions, etc. These structures are based on both local and (hierarchically organized) nonlocal dependencies. In addition, music can evoke representations of meaningful concepts and elicit emotions. This chapter reviews the neural correlates of these processes, with regard to both brain-electric responses to sounds and the neuroanatomical architecture of music perception.

Keywords

Musical processing; music-evoked emotions; musical syntax; mismatch negativity (MMN)

Introduction

“Music” is a special case of sound: as opposed to noise or noise textures (e.g., wind, fire crackling, rain, bubbling water), musical sounds have a particular structural organization in both the time domain and the frequency domain. In the time domain, the most fundamental principle of musical structure is the temporal organization of sounds based on an isochronous pulse (the tactus, or “beat”), although there are notable exceptions (such as some kinds of meditation music, or some pieces of modern art music). In the frequency (pitch) domain, the most fundamental principle of musical structure is an organization of pitches according to the overtone series, resulting in simple (e.g., pentatonic) scales. Note that the production of overtone-based scales is, in turn, rooted in the perceptual properties of the auditory system, especially in “octave equivalence” and “fifth equivalence” (Terhardt, 1991). Inharmonic spectra (e.g., of inharmonic metallophones) give rise to different scales, such as the pelog and slendro scales (Sethares, 2005). Thus for a vast number of musical traditions around the globe, and presumably throughout human history, these two principles (pulse and scale) form the nucleus of a universal musical grammar. Out of this nucleus, a seemingly infinite number of musical systems, styles, and compositions evolved. This evolution appears to have followed computational principles described, for example, in the Chomsky hierarchy¹ and its extensions (Rohrmeier, Zuidema, Wiggins, & Scharff, 2015); that is, local relationships between sounds based on a finite-state grammar, and nonlocal relationships between sounds based on a context-free grammar (possibly even a context-sensitive grammar; Rohrmeier et al., 2015).
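To make the overtone-series point concrete, here is a minimal sketch (ours, not the chapter authors’; written in Python, with an arbitrarily chosen fundamental of 110 Hz) showing how the octave (2:1) and the perfect fifth (3:2) emerge from the lowest partials of a harmonic sound, and how octave reduction collapses the partials into a small set of pitch classes from which simple scales can be assembled:

```python
# Minimal sketch (not from the chapter): the harmonic overtone series and
# the interval ratios it contains. The fundamental (110 Hz, the note A2)
# is an arbitrary example value.

fundamental = 110.0  # Hz

# A harmonic (periodic) sound has partials at integer multiples of the fundamental.
partials = [n * fundamental for n in range(1, 9)]
print("First 8 partials (Hz):", partials)

# Octave equivalence: partial 2 / partial 1 = 2.0 (ratio 2:1).
print("Octave ratio:", partials[1] / partials[0])

# Fifth equivalence: partial 3 / partial 2 = 1.5 (ratio 3:2, the perfect fifth).
print("Fifth ratio:", partials[2] / partials[1])

def reduce_to_octave(freq, base):
    """Halve freq until it lies within one octave above base."""
    while freq >= 2 * base:
        freq /= 2.0
    return freq

# Octave reduction collapses the partials into a few pitch classes; stacking
# further fifth-related pitches in this way yields simple (e.g., pentatonic) scales.
pitch_classes = sorted({reduce_to_octave(p, fundamental) for p in partials})
print("Octave-reduced pitch classes (Hz):", pitch_classes)
```

The 2:1 and 3:2 ratios appear already among the first three partials, which is consistent with the chapter’s point that octave and fifth equivalence, and hence overtone-based scales, are rooted in the perceptual properties of harmonic sounds.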
By virtue of its fundamental structural principles (pulse and scale), music immediately allows several individuals to produce sounds together. Notably, only humans can flexibly synchronize their movements (including vocalizations) in a group to an external pulse (see also Merchant & Honing, 2014; Merker, Morley, & Zuidema, 2015). This ability is possibly the simplest cognitive function that separates us from other animals (Koelsch, 2018), which would make music the decisive evolutionary step of Homo sapiens, maybe even of the genus Homo. Animals have “song” and “drumming” (e.g., bird song, ape drumming), but gorillas do not drum in synchrony, and whales do not sing in unison in choirs. Making music together in a group, that is, joint synchronized action, is a potent elicitor of social bonding, associated with the activation of pleasure circuits, and sometimes even “happiness” circuits, in the brain. Analogous to Robin Dunbar’s vocal grooming hypothesis (Dunbar, 1993), music might have replaced manual grooming as the main mechanism for social bonding during human evolution. Dunbar’s hypothesis (according to which vocal grooming paved the way for the evolution of language) is based on the observation that, similar to manual grooming, human language plays a key role in building and maintaining affiliative bonds and group cohesion. This process is putatively driven by increased group size, which increases the number of social relationships an individual needs to maintain and monitor. Once the number of relationships becomes too large, an individual can no longer maintain its social network through tactile interactions alone, and selection will favor alternative mechanisms, such as talking to several individuals at the same time. However, many more individuals can make music together in a group (compared to the group size typical for conversations), and thus establish and foster social relationships. This makes music the more obvious candidate for maintaining social networks and increasing social cohesion in large groups. Thus a musical grooming hypothesis seems at least as plausible an explanation for the emergence of music as the vocal grooming hypothesis is for the emergence of language.
Like music, the term “language” refers to structured sounds produced by humans, and, like music, spoken language has melody, rhythm, accents, and timbre. However, language is a special case of music because it requires neither a pulse nor a scale, and because it has very rich and specific meaning (it can, e.g., be used very effectively to express who did what to whom). Ulrich Reich (personal communication) once noted that “language is music distorted by (propositional) semantics.” Thus the terms “music” and “language” both refer to “structured sounds that are produced by humans as a means of social interaction, expression, diversion or evocation of emotion” (Koelsch, 2014), with language, in addition, having the property of propositional semantics. However, in language normally only one individual speaks at a time (otherwise the speech cannot be understood, and the sound is unpleasant), whereas music affords the possibility that several individuals produce sounds at the same time. In this sense, language is the music of the individual, and music is the language of the group.
These introductory thoughts illustrate that, at its core, music is not a cultural epiphenomenon of modern human societies, but lies at the heart of what makes us human, and is thus deeply rooted in our brain. Engaging in music elicits a large array of cognitive and affective processes, including perception, multimodal integration, attention, social cognition, memory, communicative functions (including syntactic processing and the processing of meaning information), bodily responses, and (when making music) action. By virtue of this richness, we presume that there is no structure of the brain whose activity cannot be modulated by music, which would make music an ideal tool for investigating the workings of the human brain. The following sections review neuroscientific research findings on some of these processes.

We do not hear only with our cochlea

The auditory system evolved phylogenetically from the vestibular system. Interestingly, the vestibular nerve still contains a substantial number of acoustically responsive fibers. The otolith organs (saccule and utricle) are sensitive to sounds and vibrations (Todd, Paillard, Kluk, Whittle, & Colebatch, 2014), and the vestibular nuclear complex in the brainstem exerts a major influence on spinal (and ocular) motoneurons in response to loud sounds with low frequencies or sudden onsets (Todd & Cody, 2000; Todd et al., 2014). Moreover, both the vestibular nuclei and the auditory cochlear nuclei in the brainstem project to the reticular formation (also in the brainstem), and the vestibular nucleus also projects to the parabrachial nucleus, a convergence site for vestibular, visceral, and autonomic processing in the brainstem (Balaban & Thayer, 2001; Kandler & Herbert, 1991). Such projections initiate and support movements, and contribute to the arousing (or calming) effects of music. Moreover, the inferior colliculus encodes consonance/dissonance (as well as auditory signals evoking fear or feelings of security), and this encoding is associated with a preference for more consonant over more dissonant music. Notably, in addition to its projections to the auditory thalamus, the inferior colliculus hosts numerous other projections, for example, into both the somatomotor and the visceromotor (autonomic) systems, thus initiating and supporting the activity of skeletal, smooth, and cardiac muscles. These brainstem connections are the basis of our visceral reactions to music, and represent the first stages of the auditory-limbic pathway, which also includes the medial geniculate body of the thalamus, the auditory cortex (AC), and the amygdala (see Fig. 1.1). Thus subcortical processing of sounds gives rise not only to auditory sensations but also to somatomotor and autonomic responses, and the stimulation of motoneurons and autonomic neurons by low-frequency beats might contribute to the human impetus to “move to the beat” (Grahn & Rowe, 2009; Todd & Cody, 2000).

Figure 1.1 Illustration of the auditory-limbic pathway. Several nuclei of the auditory pathway in the brainstem, as well as the central nuclei group of the amygdala, give rise to somatomotor and autonomic responses to sounds. Note that, in addition to the auditory nerve, the vestibular nerve also contains acoustically responsive fibers. Also note that nuclei of the medial geniculate body of the thalamus project to both the auditory cortex and the amygdala. The auditory cortex also projects to the orbitofrontal cortex and the cingulate cortex (projections not shown). Moreover, the amygdala, orbitofrontal cortex, and cingulate cortex have numerous projections to the hypothalamus (not shown) and thus also exert influence on the endocrine system, including the neuroendocrine motor system.
In addition to vibrations of the vestibular apparatus and cochlea, sounds also evoke resonances in vibration receptors, that is, in the Pacinian corpuscles (which are sensitive from 10 Hz to a few kHz, and located mainly in the skin, the retroperitoneal space in the belly, the periosteum of the bones, and the sex organs), and maybe even responses in mechanoreceptors of the skin that detect pressure. The international concert percussionist Dame Evelyn Glennie is profoundly deaf, and hears mainly through vibrations felt in the skin (pe...
