To Guide Your Reading
What is "sound-to-symbol," and how does it differ from other teaching approaches used in the music classroom?
What connections are there between music and language?
Why are these connections important when considering an approach to teaching music?
How does treating music as a language affect the way we teach it?
What are the fundamental principles of Suzuki, Kodály, Orff, and Gordon?
In what ways can competing sound-to-symbol theories coexist and support each other in the classroom?
What are the common questions, confusions, and misconceptions people have concerning these teaching approaches?
How can sound-to-symbol principles be integrated with non-sound-to-symbol curriculums and method texts?
Consider the structure of many beginning instrumental music programs: After students choose their instrument, the first lessons cover how to hold it, how to shape an embouchure or hold a bow, where to place the fingers and how to apply correct fingerings, how to produce an initial tone, and how to read simple rhythms. Students are shown a whole note, told it contains four quarter notes and gets four beats, and are then asked to play that note, usually on the easiest tone to produce. More rhythms and notes are introduced until students are prepared for simple melodies, which they read using their newly learned knowledge of notation and fingerings.
Conservatories and professional orchestras are filled with exceptional musicians who are products of this system. Nevertheless, traditional method books may not reflect the way people naturally learn music, which many educational theorists argue parallels language learning: a process that begins with learning to speak and only then moves on to learning to read. This approach is often called "sound-to-symbol," or "sound before sign," because it emphasizes aural recognition and understanding as precursors to theoretical knowledge and reading skills.
As we will see in Chapter 5, sound-to-symbol is not a new concept. Johann Heinrich Pestalozzi (1746–1827), the Swiss educational reformer; Hans Georg Nägeli (1773–1836), Pestalozzi's disciple; Sarah Ann Glover (1785–1867), the first developer of the sol-fa system; and John Curwen (1816–1880), the English minister who refined it, all advocated similar philosophies. Yet somehow, at least in American public school instrumental music education, it remains on the periphery.
In the twentieth and twenty-first centuries, at least five music education philosophies reflect the sound-to-symbol approach: Kodály, Suzuki, Orff, Gordon's Music Learning Theory, and Dalcroze Eurhythmics (which will be discussed in detail in Chapter 2). These philosophies rely on three basic assumptions, all of which are connected in some way to spoken language:
1. Nearly everybody acquires the ability to speak without the benefit of formal training.
2. The processes of acquiring language and music parallel each other in key ways.
3. One of the most important steps in learning language or music is experiencing it and doing it.
Despite Henry Wadsworth Longfellow's observation that music is the universal language, it is not technically classified as one. But perhaps music does not have to be a language in order to act like one. This is why sound-to-symbol philosophies assume a strong connection between language and music. The connection is common sense to many musicians, and the fields of linguistics, cognitive evolutionary science, and neuroscience are finding increasing evidence to support it.
Famed linguist Noam Chomsky was inspired by children's innate ability to learn complex linguistic skills on their own, a capacity he called "linguistic competence."1 Marveling at their ability to produce grammatically correct sentences they had never before heard, he postulated that all children are born with "formal universals": genetically encoded rules that operate in all languages.2 Leonard Bernstein seized upon this idea and made a case for a musical version of it in his 1973 lecture series at Harvard, The Unanswered Question. In searching for a musical grammar that explains how we innately hear music, Bernstein painstakingly applies Chomsky's grammatical rules to musical analogues, a process he admits is only "quasi-scientific."3

Bernstein's Musical–Linguistic Analogues
musical motive → noun
chord or harmony → adjective
rhythm → verb
Though they break down if we dig too deeply, Bernstein's analogies are food for thought as we work to establish a meaningful language–music connection. Fortunately, cognitive evolutionary research and brain research pick up the argument where poetic reasoning leaves off. Although the brain's music and language systems seem to operate independently, they share neurobiological origins in the same regions.4 This helps explain how music shares several features with speech. Among them:
1. A set of grammatical and syntactical rules that govern how structures (words, notes, etc.) are ordered and arranged hierarchically.
2. Memorized "representations": in the case of language, these representations are words and their meanings; in the case of music, they are melodies.
Music and language share important brain processes that interpret rhythmic structure.5 For example, both music and speech group sound into patterns called phrases, and there is evidence of overlap in how the brain processes the boundaries between those phrases.6 A study published in the Journal of Neuroscience by Dr. Nina Kraus showed not only that rhythm is related to language, but also that rhythm skills developed through music lead to strong language skills. Dr. Kraus measured how well a group of 100 teenagers could tap their fingers to a beat. She found that musically trained subjects who were best able to tap the pulse also showed enhanced neural response to language. According to Kraus, "It turns out that kids who are poor readers have a lot of difficulty doing this motor task and following the beat. In both speech and music, rhythm provides a temporal map with signposts to the most likely locations of meaningful input."7
As described by Aniruddh D. Patel in his book Music, Language, and the Brain, the brain also seems to process melodic contours in music and language in a similar way.8 If we examine the makeup of those contours closely, we find that both music and language share a predominance of small intervals between pitches,9 something that could be explained by the physiological limitations of the human voice. But a study by Patel, Iversen, and Rosenberg (2006) reveals characteristics in the music of countries that reflect the tonal patterns of those countries' languages.10 The study found that English speech patterns appear in English music and French speech patterns appear in French music,11 and these are differences that...