
eBook - ePub
Audio Anecdotes III
Tools, Tips, and Techniques for Digital Audio
- 504 pages
- English
- ePUB (mobile friendly)
- Available on iOS & Android
About this book
This collection of articles provides practical and relevant tools, tips, and techniques for those working in the digital audio field. Volume III, with contributions from experts in their fields, includes articles on a variety of topics, including:
- Recording Music
- Sound Synthesis
- Voice Synthesis
- Speech Processing
- Applied Signal Processing
Audio Anecdotes III by Ken Greenebaum and Ronen Barzel is available in PDF and ePub formats.
Recording Music
How Recordings Are Made I:
Analog and Digital Tape-Based Recording
Most modern movie-goers and television watchers are aware of the various forms of "trickery" involved in bringing scenes to cinematic life. We take for granted that there were probably multiple takes; that the dialog might have been dubbed in later to fix poor location recording; or that the sound of a blender mixing up a smoothie or a pistol being fired were added later in a sound effects suite. What most people don't realize is that this same level of sophisticated production is found in most modern audio recordings. The techniques used in music recording are fascinating in their own right, and they can enhance one's appreciation of the final product. See also Rogers' article "The Art and Craft of Song Mixing" (page 29) later in this chapter for a discussion of how such techniques are used to artistic effect.
I'll start this article by providing some background about the "traditional" hardware that is available in the recording studio. (Until recently, I would have called this the "modern" hardware, but the development of digital hard-disk-based recording is changing studio hardware, as discussed in the next article. Still, the traditional principles and techniques described in this chapter carry forward into that world.)
1 Multitrack Recording
Most popular music (rock, country, alternative) CDs use multitrack recording, in which different instruments (or different parts of an instrument) are recorded on distinct, separate regions of recording tape or a computer's hard disk. The most common systems use 24 tracks. In tape-based recording, several of these machines can be linked together to create 48- and 72-track systems. In virtual or disk-based recording, additional tracks are subject to the number of buses available, the disk access speed, and the memory limitations of the computer.
If this concept of multiple tracks is new to you, consider your stereo cassette player or CD player. These have two tracks, known as left and right; that is, two independent channels of audio information. The information on one track is processed using completely separate electronics from the other track, and this is why you are able to hear separate information coming from your two stereo speakers. (If you have more than two speakers, in a surround arrangement, the information coming from the third through nth speakers used to be extracted artificially from the two stereo tracks, and was not created in the original recording session. True multichannel audio recordings are just beginning to be commercially released on DVD-audio and SACD.) Now, by convention, what we hear coming from the two speakers are parts of the same song, and they are time-locked (synchronized) so we can listen to both tracks together and they make sense. But this does not have to be so; I have a CD of Leonard Bernstein discussing Beethoven's "5th Symphony," in English on the left channel (one of the stereo tracks) and in German on the right channel (the other stereo track). Using the balance knob on my amplifier, I can choose to listen to only one of the tracks or both. Theoretically, record companies could manufacture CDs with two mono tracks in parallel, of different performances, and you would get twice as much music on one CD. So, for example, on older recordings of Duke Ellington's Orchestra (made before there was stereo), you could have two Ellington albums on one CD; you'd just have to set the balance knob so that you wouldn't hear the cacophony that would be created by playing back both at the same time.
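The balance-knob trick is easy to model in a few lines. The sketch below is my own illustration, not anything from the book, and the function and sample values are hypothetical: a balance position of -1.0 passes only the left track, +1.0 only the right, and 0.0 passes both at full level, which is how a listener could isolate the English or the German commentary.

```python
# Illustrative sketch of a stereo balance control (not from the book).
# pos = -1.0 plays only the left track, +1.0 only the right,
# and 0.0 plays both tracks at full level.

def balance(left, right, pos):
    lgain = min(1.0, 1.0 - pos)  # attenuate the left track as pos moves right
    rgain = min(1.0, 1.0 + pos)  # attenuate the right track as pos moves left
    return [lgain * x for x in left], [rgain * x for x in right]

english = [0.3, 0.2]  # hypothetical left-channel samples
german = [0.5, 0.1]   # hypothetical right-channel samples

# Hard left: only the English track survives.
print(balance(english, german, -1.0))  # -> ([0.3, 0.2], [0.0, 0.0])
```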
Now, extend the concept of two tracks to a multitrack tape recording system which has 24, 32, or 48 independent tracks. The output of each of these tracks feeds a separate preamplifier built into a mixing console in the studio, or a virtual console on your computer monitor. Instead of having a balance control with 48 positions (awkward, to say the least), a recording engineer can decide which of the tracks to play by adjusting a separate volume control for each, or turning each track on and off with a switch (called the mute button). It is important to understand that these 24 (or however many) tracks are both time-locked and distinct. They can be recorded or played back one at a time or in any combination, without interfering with each other. This simple fact enables a number of interesting recording techniques.
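The per-track volume controls and mute buttons described above amount to a weighted sum of time-locked tracks. The following is a minimal sketch of that idea (my own illustration; the function names and sample values are hypothetical, and a real console works on continuous audio streams rather than short lists):

```python
# Minimal sketch of a mixing console's summing bus (illustrative only):
# each time-locked track has its own gain (fader) and mute button, and
# the output is the sample-wise sum of the unmuted tracks.

def mix(tracks, gains, mutes):
    """Sum equal-length tracks, applying a per-track gain and mute."""
    out = [0.0] * len(tracks[0])
    for track, gain, muted in zip(tracks, gains, mutes):
        if muted:
            continue  # a muted track never reaches the sum
        for i, sample in enumerate(track):
            out[i] += gain * sample
    return out

drums = [0.5, -0.5, 0.25]  # hypothetical sample values
sax = [0.1, 0.2, 0.3]

# Muting the sax leaves the drum track untouched in the mix.
print(mix([drums, sax], gains=[1.0, 0.8], mutes=[False, True]))  # -> [0.5, -0.5, 0.25]
```

Because the tracks stay distinct until this final sum, any one of them can be dropped, replaced, or reprocessed without touching the others, which is the point of the techniques that follow.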
First, the musicians don't all have to perform their parts at the same time. If a band decides to add a saxophone solo after they've finished recording a song, the sax player just adds her part to an empty track. It doesn't disturb parts that were already recorded. Conversely, if the group decides that they don't want to use a guitar solo they had recorded earlier, they just don't turn that track on (they can even erase it) and the rest of the parts remain undisturbed. Many groups exploit this feature of multitrack recording and add all kinds of parts (background vocals, horns, strings, and so on) just to see what they sound like, and let the producer or mixing engineer decide later what to keep and what to throw out. The mixing engineer is the engineer who combines all the tracks into a two-channel "mix," and decides how to allocate the various instruments to the left-right stereo soundfield.
Second, a given musician can play more than one instrument, and listen back to the previously recorded instruments while he is doing so to provide a reference. The guitarist and inventor Les Paul was the first to employ this technique, and Stevie Wonder, Prince, and The Beatles have all used it to great effect.
A third advantage of separate, multiple tracks is that each track can be modified or processed individually without affecting other tracks. Signal processing devices, such as compressors, expanders, tonal equalizers, noise gates, digital reverberation simulators, and digital delays can be applied to any one or multiple tracks, and they can be applied after the sound was recorded. Most high-end recording consoles and digital audio workstations have built-in parametric equalizers (EQ) on every track, allowing the engineer a wide range of tonal control over every track. For example, suppose that an electric guitar, electric bass, and acoustic guitar are recorded on three separate tracks. Maybe the electric guitar sounds too shrill, the bass sounds too muddy, and the acoustic guitar sounds too dark. Any time during the recording process, the engineer can modify these sounds by applying EQ to them individually. Multiple signal processing devices can be chained, so in this case, the engineer might EQ the bass to make it less muddy, run it through a noise gate to get rid of hum that was present in the background of the studio that day, then run it through a compressor (to even out the overall volume of the performances), and finally, another stage of EQ. This specific scenario is actually not all that uncommon.
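The bass-processing chain just described (EQ, then a noise gate, then a compressor, then more EQ) is, structurally, a sequence of functions applied to one track. The sketch below is my own illustration of that chaining pattern; each processor is a deliberately crude stand-in (a real EQ is frequency-selective, and real gates and compressors track a signal envelope rather than acting on individual samples):

```python
# Illustrative sketch of chaining processors on a single track, in the
# order described for the bass: EQ -> noise gate -> compressor -> EQ.
# Each stage is a crude stand-in, just to show the chaining pattern.

def make_eq(gain):
    # Stand-in "EQ": a broadband gain change (a real EQ is frequency-selective).
    return lambda samples: [gain * x for x in samples]

def make_gate(threshold):
    # Noise gate stand-in: silence samples quieter than the threshold.
    return lambda samples: [x if abs(x) >= threshold else 0.0 for x in samples]

def make_compressor(threshold, ratio):
    # Compressor stand-in: reduce the level of samples above the threshold.
    def compress(samples):
        out = []
        for x in samples:
            mag = abs(x)
            if mag > threshold:
                mag = threshold + (mag - threshold) / ratio
            out.append(mag if x >= 0 else -mag)
        return out
    return compress

chain = [make_eq(0.9), make_gate(0.05), make_compressor(0.5, 4.0), make_eq(1.1)]

signal = [0.02, 0.4, 0.9, -0.7]  # hypothetical bass samples
for stage in chain:
    signal = stage(signal)
# The quiet hum sample is gated to 0.0 and the loud peaks are tamed.
```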
2 The Basic Tracks
The typical way that rock and country music are produced is to record the rhythm section first, usually the drums, bass guitar, and maybe a rhythm guitar. At this time, the vocalist records a scratch vocal: a temporary vocal track just to help the rhythm players keep track of where they are in the song. The vocalist typically doesn't give it his all at this stage, and the engineer doesn't always bother to set up a particularly good microphone, because the plan is to replace this vocal (overdub it) later with a better performance. You can often find a lot of joking around on these scratch vocal tracks.
John Lennon was working on a new album in 1980 which eventually became Milk and Honey. He had recorded scratch vocals to accompany the musicians' basic tracks, but he was killed before any final vocals were recorded. The vocals you hear on the version of the album that has been released were what Lennon had intended only as temporary vocals, and so they contain a certain degree of casualness, and an absence of full-voice singing, that would not normally be found on a final vocal.
The various instruments used in the rest of the piece are usually added one at a time. Musicians adding a new part can listen to any combination of the instruments already recorded, in any volume mix that they choose. A rhythm guitarist might want to hear lots of bass and drums so he can keep time; a lead guitarist might want to hear lots of keyboards so he can hear the chord changes better.
This is the norm in popular and country recording. Traditional jazz, classical, bluegrass, and folk have followed a different tradition. In these genres, the musical communication between players is considered an essential part of the performance, and they would never consider playing separately from one another. Neil Young is an example of a rock artist who tends to favor live recordings with minimal overdubs, but he is an exception in the rock world. One of the issues here is purely technical: Creating a clean rock recording with loud electric guitars is difficult when the guitar amps, the drums, and the vocalist are all playing in the same room at the same time, because the sound of the instruments leaks into the microphones of the other instruments, creating a muddy sound. If you care to, listen to Led Zeppelin III and Houses of the Holy to hear the radical difference in recording quality as the group moved from live recording to an overdub approach, the latter of which allowed for sonic isolation between the instruments and the attendant improvements in sound quality.
There is also a movement, at the vanguard of audio engineering, to use as little audio processing as possible. These engineers often boast on album covers that they have used no EQ, no digital reverberation, etc. The results can sound stunningly lifelike, but pulling this off requires a great-sounding musician to begin with, and a great deal of skill on the part of the engineer. One famous example of an album with no equalization is Steely Dan's Countdown to Ecstasy, recorded by Grammy Award-winning engineer Roger Nichols. To record an entire album without any outboard effects is a challenge, but it does not guarantee a superior product. Some of the best engineers in the world (Roger Nichols, Bruce Swedien, and George Massenburg, for example) use outboard signal processing devices judiciously to create beautiful recordings, and in many cases, to create interesting hyperrealities.
3 Soundscape
3.1 Illusions of Perspective: Realism versus Hyperrealism
One of the most interesting aspects of cinematography is that we are able to see on the movie screen things that we could never see in real life. A classic example of this is the movie chase scene. In the theater, we can see the pavement speeding by from a camera mounted on the door of the car, or we can see the road ahead from a camera mounted on the front bumper. In a sense, these are very unrealistic vantage points; we rarely are able to put our eyeballs in these positions. An even more startling example of an impossibility is when the director cuts from one of these cameras to another, allowing you to see two very different perspectives in rapid succession. What the director and cinematographer are conveying is an intentionally unrealistic view of the world; they are providing a set of impossible perspectives in order to provide excitement and a sort of hyperrealism. Please see Bailey's article "Spatial Emphasis of Game Audio" (page 399), where such techniques are applied to video games to create hyperrealistic cinematic experiences.
Of course, chase scenes aren't the only use of techniques that create unreal perspectives. Even simple head shots of someone talking give the illusion that your eye is only three inches from the person's face, revealing pores and details most of us never see. Modern recording also uses technology to create hyperrealities.
3.2 Microphone Placement
One common technique is based on a simple concept: microphone placement. For example, when recording an acoustic guitar, the engineer might use two microphones, one at each end of the guitar, and record these onto two separate tracks. During mixing, one of these tracks is assigned to the left stereo field, and the other to the right stereo field. If you listen back at home and your speakers are eight feet apart, it sounds like the guitar is eight feet wide! (It also sounds like your head is right in the middle of the guitar, which of course it couldn't be in real life, or the guitarist would be strumming your face.) In headphones, the illusion of your head being right inside the guitar is even more compelling because there is virtually no air between the transducers and your ear. The guitarist Alex deGrassi records his acoustic guitars using this technique, which is particularly evident on his albums The World's Getting Loud and Slow Circle.
Any instrument can be recorded in this way, known as recording with stereo mics, split panned. Split panning refers to the two mics being split in the stereo image, so that one is assigned completely to the left channel and the other is assigned completely to the right channel. (The assignment is made with a pan pot, the control knob on each channel whose name is short for "panoramic potentiometer.") With only one mic, the instrument can be assigned to one speaker or the other, or to any arbitrary point between them. Only by rendering the signal with two mics, however, can the sound break free of point-source localization and begin to take up more space in the stereo image, the ultimate being the illusion that the instrument is surrounding the listener. Grand pianos are often recorded this way, too, in popular, jazz, and classical music, because it gives the listener a sense of being enveloped in sound.
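Under the hood, a pan pot simply splits one mono signal between the two channels with complementary gains. The sketch below is my own illustration (not from the book) using the common equal-power sin/cos pan law; split panning corresponds to the two extreme positions, pos = 0.0 and pos = 1.0:

```python
import math

# Illustrative equal-power pan pot: pos runs from 0.0 (hard left)
# to 1.0 (hard right); the cos/sin gains keep the total power constant.

def pan(sample, pos):
    angle = pos * math.pi / 2.0
    return math.cos(angle) * sample, math.sin(angle) * sample

left, right = pan(1.0, 0.0)  # hard left: all signal in the left channel
center = pan(1.0, 0.5)       # center: both channels at about 0.707 (-3 dB)
```

With two mics split panned, each mic's signal would go through pan() at pos = 0.0 and pos = 1.0 respectively, so each mic occupies one entire side of the stereo image.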
Other instruments lend themselves to different spatial effects. Drums are typically recorded with one microphone on each individual drum, and these are panned in a semicircular arc, emulating the sound that a drummer would hear sitting at the drums: the high-hat just to the left, the ride cymbal on the right, the snare and kick drums in the middle, and the tom-toms sweeping around the arc of a semicircle, from left to right. The sound we hear through our speakers and headphones, however, is typically much better than the drummer actually hears; because the mics are placed adjacent to each sound source, each percussive component conveys the sound it would if your ear were right up next to it. Stevie Wonder was one of the first to do this, working with engineers Malcolm Cecil and Bob Margouleff, on his album Music of My Mind.
The same is true with vocals: the engineer typically places a very sensitive microphone an inch or two in front of the singer. This makes it sound as though your ear is just in front of the singer's mouth. In ballads, this adds intimacy to the performance, especially when listening back in headphones; in heavy metal, it adds a great deal of power, and gives the vocals a presence that keeps them from being swallowed up by the other instruments in the mix. Again, in real life, our ears are never just two inches from the singer's mouth, but through recording we experience this illusion. For years, my favorite example of this was Paul McCartney's vocal on "Honey Pie" from the Beatles' White Album. The mic, probably a Telefunken M49, is so close to his mouth, you can actually hear his lips part just before he pronounces the "p" in the word "pie"; when he sings the word "crazy," you can hear the air moving as he sets his mouth to pronounce the "c." Recently, I found a recording that conveys this effect even better: Aimee Mann's vocals on "Jacob Marley's Chain," from her album Whatever (recorded with Neumann's version of the M49, a U49). She uses vocal dynamics artfully to create the illusion she is practically whispering the song in your ear. Mixing engineer Bob Clearmountain added a great deal of compression to the vocal to even out the dynamics, so that loud and soft passages appear to be at the same volume, even as Aimee goes from very soft to very loud.
Now imagine listening to a group in which all of the instruments have been recorded with the microphones right on top of them. This is called close miking, and it is how most rock records are made. The listener experiences the ultimate in hyperrealistic perspective: hearing each instrument as though her ear were right up against it, all at the same time!
This is equivalent to the rapid edits in a movie, except with albums, you, the listener, get to decide when to switch your attention from one instrument to another, or whether to take in the whole scene.
It is interesting to consider the cognitive differences between seeing and hearing. Because visual information is spread out across space and auditory information is spread out across time, the two sensory experiences are fundamentally different. When we shift attention from one visual stimulus to another, we have to move our eyes. To shift attention from one auditory stimulus to another, we don't move our ears; we simply focus our attention on a different aspect of the sound that is impinging on our eardrums. In a musical performance, we can concentrate on an individual instrument or on the whole (the Gestalt). In a visual performance, such as a movie, we can only have the equivalent degree of control if we are provided with multiple views, for example, if the director splits the image up into several parts. Note also that in a movie, the director and cinematographer often use an assortment of lighting and image-composition tricks to get you to look at exactly the part of the screen they want you to, whether that's focusing on the face of a character who's making some significant expression or looking off to the side i...
Table of contents
- Cover
- Half Title
- Title Page
- Copyright Page
- Table of Contents
- Preface
- Introduction
- 1 Recording Music
- 2 Sound Synthesis
- 3 Voice Synthesis
- 4 Speech Processing
- 5 Applied Signal Processing
- 6 HRTF Spatialization
- 7 Synchronization
- 8 Music Composition
- 9 Human Experience
- Glossary of Audio Terms
- Contributor Biographies
- Index