Music Cognition: The Basics

Henkjan Honing
About This Book

Why do people attach importance to the wordless language we call music? Music Cognition: The Basics considers the role of our cognitive functions, such as perception, memory, attention, and expectation in perceiving, making, and appreciating music.

In this volume, Henkjan Honing explores the active role these functions play in how music makes us feel: exhilarated, soothed, or inspired. Grounded in the latest research in psychology, biology, and cognitive neuroscience, and with clear examples throughout, this book concentrates on underappreciated musical skills such as a sense of rhythm, beat induction, and relative pitch that make people intrinsically musical creatures, supporting the conviction that all humans have a unique, instinctive attraction to music.

The topics discussed range from the ability of newborns to perceive a beat to the unexpected musical expertise of ordinary listeners. It is a must-read for anyone studying the psychology of music or auditory perception, or for anyone simply interested in why we enjoy music the way we do.


Information

Publisher: Routledge
Year: 2021
ISBN: 9781000451566

PART 1

De do do do, de da da da: The tone of speech and music

DOI: 10.4324/9781003158301-1

1

First listening experiences

DOI: 10.4324/9781003158301-2
Why do babies like infant-directed speech better than normal speech? Are we born musical? What do we know about the capacity for music and music cognition?
On the face of it, it is a strange phenomenon: adults who, the moment they lean over to peer into a baby buggy, start babbling a curious baby talk. And it doesn’t just happen to fathers and mothers; it overcomes many people in the same situation. In fact, we all seem to be capable of it, this “de do do do, de da da da.”
But what exactly are we saying to our little fellow human beings? What message can be derived from this “de do do do, de da da da?”
The technical term for this baby talk is infant-directed speech (ids). It is a form of speech that distinguishes itself from normal adult speech through its higher overall pitch, exaggerated melodic contours, slower tempo, and greater rhythmic variation. It appears to be a kind of musical language; however, it is one with an indistinct meaning and virtually no grammar. For these reasons, I will call it “musical prosody.” Babies love it, and coo with delight in response to the rhythmic little melodies, which often have the same charm as pop songs like The Police’s “De do do do, de da da da” and Kylie Minogue’s hit “La la la.”
Numerous sound archives around the world have recordings of these musical conversations between adults and children. If you listen to several of them, most of the time you won’t be able to understand what’s being said, but you will be able to identify the situation and particularly the mood because of the tone. It will quickly become apparent whether the message is playful, instructive, or admonitory. Words of encouragement, such as “That’s the way!” or “Well done!” are usually uttered in an ascending and subsequently descending tone, with the emphasis on the highest point of the melody. Warnings such as “No, stop it!” or “Be careful, don’t touch!” on the other hand, are generally voiced at a slightly lower pitch, with a short, staccato-like rhythm. If the speech were to be filtered out so that its sounds or phonemes were no longer audible and only the music remained, it would still be clear whether encouragement or warning was involved. This is because the relevant information is contained more in the melody and rhythm than it is in the words themselves.
Most linguists see the use of rhythm, dynamics, and intonation as an aid for making infants familiar with the words and sentence structures of the language of the culture in which they will be raised. Words and word divisions are emphasized through exaggerated intonation contours and varied rhythmic intervals, thereby facilitating the process of learning a specific language.
From a developmental perspective, the period during which parents use ids is remarkably long. Infants have a distinct preference for ids from the moment they are born, only developing an interest in adult speech after about nine months. Before that time, they appear to listen mostly to the (musical) sounds themselves. An interest in specific words, word division, and sound structure only comes after about a year, at which time they also begin to utter their first meaningful words. The characterization of ids as an aid to learning specific languages, therefore, seems less plausible to me, at least with respect to the earliest months of our lifespan.
An alternative might be to see the sensitivity to ids not as a preparation for speech but as a form of communication in its own right: a kind of “musical prosody” used to communicate and discover the world for as long as “real” speech is absent.
If you subsequently emphasize the type of information most commonly conveyed in those aspects of speech in which infants have the greatest interest during their first nine months, the conclusion must be that ids is, first and foremost, a way of conveying emotional information. It is an emotional language that, even without grammar, is still meaningful. The role of melody and rhythm in this emotional language is as significant as the role of word order is insignificant. This is because during their first year, infants are primarily interested in the musical aspects of babbling. Both caregivers and infants make use of the melodic, rhythmic, and dynamic aspects of ids; they speak the same “language”—the “language of emotion.”
In 2009 the scientific journal Current Biology published an empirical study with the intriguing conclusion that French babies cry differently than German babies. Recordings made by the researchers demonstrated convincingly that German newborns generally cry with a descending pitch contour; French newborns, on the other hand, with an ascending pitch contour, descending slightly only at the end. This was a surprising observation, particularly in light of the commonly accepted theory that when one cries, the pitch contour will always descend as a physiological consequence of the decreasing pressure during the production of sound. Apparently, though, babies only a few days old can influence not only the volume and dynamic contour, but also the pitch contour of their crying. Why would they do this?
The researchers interpreted these differences as the first steps in the development of language: in spoken French, the average intonation contour is ascending, while in German it is just the opposite. Knowing that human hearing is already functional during the last trimester of pregnancy led the researchers to conclude that these babies absorbed the intonation patterns of the spoken language in their environment during the last months of pregnancy and consequently imitated it when they cried. This observation was also surprising because until then it was generally assumed that infants only develop an awareness for their mother tongue between six and eighteen months, at the age they start babbling and learning their parents’ language. Could this indeed be unique evidence, as the researchers emphasized, that language sensitivity is already present at a very early stage, or could it be an indication of something entirely different?

Musicality precedes language

While the facts appear to be clear and convincing, this interpretation is a typical example of what one could call a language bias: the linguist’s understandable enthusiasm to interpret many phenomena in the world as linguistic. In this case, however, I believe it is a misjudgement. There is much more to suggest that these newborn babies exhibit an aptitude whose origins are found not in language but in music.
At the beginning of this chapter, we saw that babies possess a keen perceptual sensitivity for the melodic, rhythmic, and dynamic aspects of sound, aspects that linguists are inclined to categorize under the term “prosody,” but which are in fact the building blocks of music. Only much later in a child’s development does he make use of this “musical prosody,” for instance in recognizing word boundaries. But these very early indications of musical aptitude are in essence nonlinguistic. It is a matter of “illiterate listening”: a human ability to discern, interpret, and appreciate musical nuances already from day one, long before the baby has uttered, let alone conceived, a single word. It is the preverbal stage that is dominated by musical listening.
It will therefore come as no surprise that the musical components of ids also form an important part of speech later in life, although by then of course, we use rhythm, stress, and intonation infinitely more subtly than when talking to infants. From the tone of someone’s utterance, we can decipher whether he or she is happy, angry, or excited. C’est le ton qui fait la musique, it’s not what you say but how you say it. And we usually have little difficulty in deciding whether what is said is a question, an assertion, or an ironic remark.
But there are also other reasons for viewing ids as an early sign of musical behaviour rather than as a preparation for adult speech. The relationship between the linguistic aspects of ids (such as the meaning of words) and the musical aspects (such as rhythm and melody) is clearly visible, especially in those cultures where the native language is a tonal one, such as Mandarin Chinese. In tonal languages, a melody can easily conflict with the meaning of the word, which is determined by pitch. A well-known example is the word “ma” in Mandarin Chinese. Depending on the pitch at which it is uttered, it can mean either “mother” or “horse.” It is striking that in such cases the emotional information of ids “wins” over the purely phonemic aspects. During the earliest months, the musical information of a word is thus much more important than its specific meaning.
Canadian developmental psychologist Laurel J. Trainor has been conducting extensive research in this area. She is not alone in believing that the most important function of ids is to create and maintain an emotional relationship between the caregiver and the infant. She has shown in various studies that young infants have no difficulty at all in deciphering the emotional information in speech or in a children’s song at the moment it’s sung. It is very exciting when you realize that infants can derive specific emotional information from the complex timing, phrasing, and variations in pitch in the language their parents speak with them: minimal differences in pitch, intonation, and length of syllables, corresponding to emotions ranging from “comfort” to “fear” and “surprise” to “affection,” are interpreted correctly. Striking, too, is the fact that infants can distinguish these “melodious” emotions before they are able to recognize them in facial expressions. The cognitive functions involved in listening to music and speech appear to precede the development of visual perception. (In some ways, a head start would seem logical because babies already have functional hearing some three months before they are born.)
It appears that many of the musical skills we normally attribute to adults are also present in infants from the age of a few days to several months. Four-month-old babies can distinguish pitch intervals with great precision, as well as remember and recognize simple folksongs. Infants also seem to be much more sensitive to a wide range of subtle melodic and rhythmic differences than most adults. A study conducted at the University of Miami, in which both adults and six-month-old infants listened to melodies from Western and Javanese musical traditions, bears this out. Javanese melodies sound distinctly different to Western ears: the tones are tuned differently and have different frequency ratios (i.e., pitch intervals) than those in our culture. In the listening experiment, one tone in each melody was tuned either slightly higher or slightly lower than normal. The adult listeners were easily able to identify these changes in the Western melodies, but not in the Javanese variations. The (North American) infants, on the other hand, could hear the differences in both the Western and the Javanese melodies.
All these studies support the idea that we’re born with a set of listening skills but can lose our sensitivity to specific musical nuances as we become accustomed to the conventions of the musical tradition in which we grow up. In the case of the Miami study, this means that the more deeply people in the West become embedded in prevailing music traditions, the less they are able to distinguish tonal nuances in the less frequently heard Javanese music.
The phenomenon of the loss of certain sensitivities relevant to our perception of music is paralleled in our linguistic development. Here, too, certain tonal nuances will often no longer be noticed or precisely reproducible at a later age. A case in point is the ability to hear the difference between an “r” and an “l.” Japanese infants can hear the difference with no difficulty at all, while Japanese adults struggle to make the distinction. In fact, humans lose flexibility in exchange for a more efficient processing of those aspects that are relevant to a specific language or musical tradition.
This kind of developmental psychology research has also been conducted on other aspects of music, such as rhythm and timing, with similar results. Infants and young children turn out to be extremely sensitive to melodic and rhythmic differences in speech and music, and often have a more highly developed sensitivity in these areas than the average adult. The flexibility of young children in experiencing and interpreting music disappears by about the time they start to go to school. By this age they will have been heavily influenced by culture-specific aspects of music such as tonal and harmonic structure. Such aspects are clearly learned as a result of exposure to the musical patterns characteristic of the music of the culture in which they are raised.
In short, the ability to recognize subtle differences in rhythm and pitch—in both speech and music—appears to be innate. From a young age we are very skilled decoders of the often emotionally laden, nonlinguistic information embedded in the musical prosody—the rhythm, stress, and intonation—of both music and speech. Language, with its specific word order and virtually unlimited lexicon and multiplicity of meanings, only blossoms much later in human beings. This order of development tells us something about the function of music in human development and the innateness of musicality to each individual. So how important is all this? Apart from the essential significance of musical prosody for emotional bonding between infants and parents, what is the evolutionary advantage of music and musicality?1

Music and evolution

In the evolutionary sense, music can be described as “pointless”: it does not quell our hunger, nor do we seem to live a day longer because of it. In fact, music appears to be of little use to us, aside perhaps from the pleasure that creating or listening to it affords us. This, at least, is what cognitive psychologist Steven Pinker maintains. At the end of the 1990s, he famously characterized music as “auditory cheesecake”: a delightful dessert but, from an evolutionary perspective, no more than a by-product of language (see Box 1.1).
Pinker provoked considerable anger in many music scholars at the time by contrasting language with music, and citing language as an example of evolutionary relevance and music as an example of evolutionary irrelevance. But might Pinker be right in this? Are there any arguments to show that music may have played a definitive role in man’s evolutionary development? Might music not be an “adaptation” after all, one that has contributed to the survival of man as a species? Or is music, as he suggests, no more than a pleasant side effect of more important functions like speech and language? Not an adaptation, but an “exaptation,” in which existing traits are put to new use (as an example: feathers originally served as insulation, but later, via gliding, were selected as an effective means of locomotion)?
Music scholars—and music educators in particular—took offence at Pinker’s ideas and have since put a great deal of effort into searching for scientific evidence to show that music does count (note that Pinker did not state that music does not matter, but that our capacity for music is probably not an adaptation). The so-called “Mozart effect” is...
