
Sample Rate

Sample rate refers to the number of samples of audio or video taken per second. In digital audio, it sets the upper limit on the frequencies that can be captured and therefore strongly influences the fidelity of the sound. A higher sample rate captures more detail but requires more storage space and processing power. It is a critical parameter in digital signal processing and multimedia applications.

Written by Perlego with AI-assistance

10 Key excerpts on "Sample Rate"

  • Basic Live Sound Reinforcement

    A Practical Guide for Starting Live Audio

    • Raven Biederman, Penny Pattison (Authors)
    • 2013 (Publication Date)
    • Routledge (Publisher)
    Bit depth indicates the number of bits (the amount of information) recorded for each sample as it is quantized. In both cases the higher the number the more original data goes into making the copy, and the better the sound of the final recording.

    Sampling Rate

    Sampling rate (or sampling frequency) refers to the number of samples taken per second; the greater the frequency, the greater the number of samples taken, so higher sampling rates use more of the original data to create the copy. The two balancing factors that typically must be weighed when deciding on a sampling rate are the quality of the recording and the storage space required.
    • Higher sampling rate preserves sound quality.
    • Lower sampling rate saves disk space (which is no longer much of an issue).
    Aside from these two considerations, there is a limit to how low our sampling rate can go in audio because of the Nyquist theorem: the highest-frequency component (including partials and overtones) that can be captured with a given sampling rate is one-half that sampling rate. This frequency threshold is called the critical frequency or Nyquist frequency. To ensure we capture the entire bandwidth of frequencies the human ear can perceive, the audio sampling rate has to be at least twice the highest frequency component in the signal that we want to capture. It is important to note that the sample frequency is not the same as the sound frequency, even though we must base the sample rate on it.

    Aliasing

    As a result of the Nyquist theorem, a system sampling a waveform at a sampling rate of 20,000 Hz cannot reproduce frequencies above 10,000 Hz. When a frequency above the Nyquist frequency is sampled and played back, the frequencies don't simply disappear. Instead, recorded frequencies falling above the Nyquist frequency result in an audio distortion called aliasing.
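To make the aliasing described above concrete, here is a minimal Python sketch (not from the book): a 12 kHz tone sampled at 20 kHz shows up in the sampled spectrum as its alias near 8 kHz, folded about the 10 kHz Nyquist frequency. The tone, rate, and FFT length are illustrative choices.

```python
import numpy as np

fs = 20_000           # sampling rate in Hz; Nyquist frequency is 10 kHz
f_in = 12_000         # tone above the Nyquist frequency (illustrative)
n = np.arange(2048)

x = np.sin(2 * np.pi * f_in * n / fs)        # sample the 12 kHz tone at 20 kHz

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
print(f"apparent frequency: {freqs[np.argmax(spectrum)]:.0f} Hz")   # ~8 kHz: the alias of 12 kHz
```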
  • DSP Software Development Techniques for Embedded and Real-Time Systems
    • Robert Oshana (Author)
    • 2006 (Publication Date)
    • Newnes (Publisher)
    This theorem states that the highest frequency that can be represented accurately is one half of the sampling rate. The Nyquist rate specifies the minimum sampling rate that fully describes a given signal; in other words, a sampling rate that enables the signal's accurate reconstruction from the samples. In reality, the sampling rate required to reconstruct the original signal must be somewhat higher than the Nyquist rate, because of quantization errors introduced by the sampling process.
    As an example, humans can hear frequencies in the range of 20 Hz to 20,000 Hz. If we were to store sound, such as music, on a CD, the audio signal must be sampled at a rate of at least 40,000 Hz to reproduce the 20,000 Hz component. A standard CD is sampled 44,100 times per second, or at 44.1 kHz. The required sampling rate depends on the signal frequencies processed by the application: radar signal processing sampling rates are on the order of one to several gigahertz, video applications sample at or near 10 MHz, audio applications are sampled in the 40 to 60 kHz range, and modeling applications such as weather or financial modeling systems are sampled at much lower rates, sometimes less than once per second.
    The Nyquist criterion sets a lower bound for the sampling rate. In practice, algorithm complexity may set an upper bound: the more complex the algorithm, the more instruction cycles are required to compute the result, and the lower the sampling rate must be to accommodate the time needed to process these complex algorithms. This is another reason why efficient algorithms must be designed and implemented in order to meet the required sampling rates and achieve the right application performance or resolution.

    Aliasing

    If an analog signal is not sampled at the minimum Nyquist rate, the sampled data will not accurately represent the true signal.
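The two bounds the excerpt describes, the Nyquist rate from below and algorithm complexity from above, reduce to back-of-the-envelope arithmetic. A small sketch follows; the 200 MHz processor clock is a made-up figure for illustration, not something from the book.

```python
f_max = 20_000           # highest frequency we want to capture (Hz)
fs_min = 2 * f_max       # Nyquist lower bound on the sample rate
fs = 44_100              # chosen rate (the CD standard), comfortably above fs_min

cpu_hz = 200_000_000     # hypothetical 200 MHz DSP core (assumed, not from the book)
cycles_per_sample = cpu_hz / fs   # upper bound on per-sample algorithm cost

print(f"minimum sample rate: {fs_min} Hz")
print(f"cycle budget per sample at {fs} Hz: {cycles_per_sample:.0f}")
```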
  • Theory and Design for Mechanical Measurements
    • Richard S. Figliola, Donald E. Beasley (Authors)
    • 2015 (Publication Date)
    • Wiley (Publisher)
    (the number of samples per second) or sample rate (in Hz) of f_s = 1/δt (7.1). For this discussion, we assume that the signal measurement occurs at a constant sample rate. For each measurement, the amplitude of the sine wave is converted into a number. For comparison, in Figures 7.2b–d the resulting series versus time plots are given when the signal is measured using sample time increments (or the equivalent sample rates) of (b) 0.010 second (f_s = 100 Hz), (c) 0.037 second (f_s = 27 Hz), and (d) 0.083 second (f_s = 12 Hz). We can see that the sample rate has a significant effect on our perception and reconstruction of a continuous analog signal in the time domain. As the sample rate decreases, the amount of information per unit time describing the signal decreases. In Figures 7.2b,c we can still discern the 10-Hz frequency content of the original signal. But we see in Figure 7.2d that an interesting phenomenon occurs if the sample rate becomes too slow: the sine wave appears to be of a lower frequency. We can conclude that the sample time increment or the corresponding sample rate plays a significant role in signal frequency representation.
    [Figure 7.2: The effect of sample rate on signal frequency and amplitude interpretation. Panels: (a) original 10-Hz sine wave analog signal; (b) f_s = 100 Hz; (c) f_s = 27 Hz; (d) f_s = 12 Hz.]
    The sampling theorem states that to reconstruct the frequency content of a measured signal accurately, the sample rate must be more than twice the highest frequency contained in the measured signal.
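The behaviour in Figure 7.2 can be checked numerically. The small sketch below (my own, not from the text) applies the standard frequency-folding rule to the 10-Hz sine at the three sample rates: at 100 Hz and 27 Hz the tone is still seen at 10 Hz, while at 12 Hz it appears at 2 Hz.

```python
def apparent_frequency(f_signal, fs):
    """Fold a signal frequency into the observable band [0, fs/2] (standard aliasing rule)."""
    f = f_signal % fs          # alias into [0, fs)
    return min(f, fs - f)      # reflect into [0, fs/2]

for fs in (100, 27, 12):       # the three sample rates used in Figure 7.2
    print(f"f_s = {fs:>3} Hz -> apparent frequency {apparent_frequency(10, fs)} Hz")
```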
  • Sound and Recording

    Applications and Theory

    • Francis Rumsey (Author)
    • 2021 (Publication Date)
    • Routledge (Publisher)
    Arguments for the adoption of higher sampling frequencies have been widely made, quoting evidence from sources claiming that information above 20 kHz is important for higher sound quality, or at least that the avoidance of steep filtering is a good thing. AES5-2018 (a revision of the AES standard on sampling frequencies) allows 96 kHz as an optional rate for applications in which the audio bandwidth exceeds 20 kHz or where relaxation of the anti-alias filtering region is desired. Doubling the sampling frequency leads to a doubling in the overall data rate of a digital audio system and a consequent halving in storage time per megabyte. It also means that any signal processing algorithms need to process twice the amount of data and alter their algorithms accordingly. It follows that these higher sampling rates should be used only after careful consideration of the merits.
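The data-rate claim above is simple arithmetic. A small sketch, assuming a 24-bit stereo stream (the bit depth and channel count are assumptions, not taken from the text):

```python
def data_rate_bytes_per_second(fs, bits_per_sample, channels):
    return fs * bits_per_sample * channels / 8

for fs in (48_000, 96_000):                          # doubling the sampling frequency
    rate = data_rate_bytes_per_second(fs, 24, 2)     # assumed 24-bit stereo stream
    print(f"{fs} Hz: {rate / 1e6:.3f} MB/s, about {1e6 / rate:.2f} s of audio per megabyte")
```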
    Low sampling frequencies such as those below 30 kHz are sometimes encountered for lower quality sound applications such as the storage and transmission of speech and the generation of computer sound effects. Multimedia applications may need to support these rates because such applications often involve the incorporation of sounds of different qualities. There are also low sampling frequency options for data reduction codecs, as discussed in Chapter 9 .
    At conversion stages, the stability of timing of the sampling clock is crucial, because if it is unstable, the audio signal will contain modulation artifacts that give rise to increased distortions and noise of various kinds. This so-called clock jitter (see Fact File 5.6 ) is one of the biggest factors affecting sound quality in converters, and high-quality external converters usually have much lower jitter than the internal converters used on PC sound cards.

    Fact File 5.6 Jitter

    Jitter is the term used to describe clock speed or sample timing variations in digital audio systems and can give rise to effects of a similar technical nature to wow and flutter in analog systems, but with a different spectral spread and character. It typically only affects sound quality when it interferes with the A/D or D/A conversion process. Because of the typical frequency and temporal characteristics of jitter, it tends to manifest itself as a rise in the noise floor or distortion content of the digital signal, leading to a less ‘clean’ sound when jitter is high. If an A/D converter suffers from jitter, there is no way to remove the distortion it creates from the digital signal subsequently, so it pays to use converters with very low jitter specifications.
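A rough way to see why jitter raises the noise floor is to simulate it as a random timing error at the converter. The sketch below is illustrative only; the 1 ns RMS jitter figure and the 10 kHz test tone are assumptions, not values from the book.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, n = 48_000, 10_000, 1 << 16          # sample rate, test tone, number of samples
sigma_t = 1e-9                               # 1 ns RMS clock jitter (assumed figure)

t_ideal = np.arange(n) / fs
t_jittered = t_ideal + rng.normal(0.0, sigma_t, n)

clean = np.sin(2 * np.pi * f0 * t_ideal)        # what a perfect converter would capture
jittered = np.sin(2 * np.pi * f0 * t_jittered)  # what a jittery converter actually captures

noise = jittered - clean
snr_db = 10 * np.log10(np.mean(clean ** 2) / np.mean(noise ** 2))
print(f"SNR limited by jitter: {snr_db:.1f} dB")   # roughly -20*log10(2*pi*f0*sigma_t), about 84 dB
```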
  • Audio Engineering Explained
    • Douglas Self (Author)
    • 2012 (Publication Date)
    • Routledge (Publisher)
    There are cost-effective A/D converters that can shape the quantization noise and produce a high-quality signal. Sigma-delta converters, or noise-shaping converters, use an oversampling technique to reduce the amount of quantization noise in the signal by spreading the fixed quantization noise over a bandwidth much larger than the signal band (Aziz et al., 1996). The technique of oversampling and noise shaping allows the use of relatively imprecise analog circuits to perform high-resolution conversion. Most digital audio products on the market use these types of converters.
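As a loose illustration of the noise-shaping idea (not the book's circuit, and far simpler than a real sigma-delta modulator), a first-order error-feedback quantizer pushes its quantization error toward high frequencies, where a later low-pass/decimation stage could remove it:

```python
import numpy as np

def noise_shaping_quantize(x, step):
    """First-order error-feedback quantizer: shapes quantization error toward high frequencies."""
    y = np.empty_like(x)
    err = 0.0
    for i, sample in enumerate(x):
        v = sample + err                   # feed the previous quantization error back in
        y[i] = step * np.round(v / step)   # coarse uniform quantizer
        err = v - y[i]
    return y

def inband_error(err, frac=0.1):
    """Error energy in the lowest `frac` of the spectrum (a relative measure for comparison)."""
    spec = np.abs(np.fft.rfft(err)) ** 2
    return spec[: int(len(spec) * frac)].sum()

t = np.arange(4096) / 4096
x = 0.5 * np.sin(2 * np.pi * 8 * t)            # slow, heavily oversampled test tone
shaped = noise_shaping_quantize(x, step=0.25)
plain = 0.25 * np.round(x / 0.25)              # ordinary quantization, same step size
print(f"in-band error energy, plain : {inband_error(x - plain):.3f}")
print(f"in-band error energy, shaped: {inband_error(x - shaped):.3f}")
```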

    Sample Rate selection

    The sampling rate, 1/T , plays an important role in determining the bandwidth of the digitized signal. If the analog signal is not sampled often enough, then high-frequency information will be lost. At the other extreme, if the signal is sampled too often, there may be more information than is needed for the application, causing unnecessary computation and adding unnecessary expense to the system.
    In audio applications it is common to have a sampling frequency of 48 kHz = 48,000 Hz, which yields a sampling period of 1/48,000 = 20.83 µs. Using a sample rate of 48 kHz is why, in many product data sheets, the amount of delay that can be added to a signal is an integer multiple of 20.83 µs.
    The choice of which sample rate to use depends on the application and the desired system cost. High-quality audio processing would require a high sample rate, while low-bandwidth telephony applications require a much lower sample rate. A table of common applications with their sample rates and bandwidths is shown in Table 15.3. As shown in the sampling process, the maximum bandwidth will always be less than 1/2 the sampling frequency. In practice, the antialiasing filter will have some roll-off and will band-limit the signal to less than 1/2 the sample rate. This band-limiting will further reduce the bandwidth, so the final bandwidth of the audio signal will be a function of the filters implemented in the specific A/D and the sample rate of the system.
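The 48 kHz arithmetic above is easy to reproduce. The sketch below also shows why a requested delay ends up rounded to a whole number of 20.83 µs sample periods (the 2.57 ms delay is an arbitrary example, not from the text):

```python
fs = 48_000
sample_period_us = 1e6 / fs                    # 20.833... microseconds per sample

delay_ms = 2.57                                # desired delay (illustrative value)
delay_samples = round(delay_ms * 1e-3 * fs)    # a delay line can only hold whole samples
actual_ms = delay_samples * sample_period_us / 1000

print(f"sample period: {sample_period_us:.2f} us")
print(f"{delay_ms} ms requested -> {delay_samples} samples -> {actual_ms:.4f} ms actually applied")
```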

    ALGORITHM DEVELOPMENT

    Once a signal is digitized, the next step in a DSP system is to process the signal. The system designer will begin the design process with some goal in mind and will use the algorithm development phase to develop the necessary steps (i.e., the algorithm) for achieving the goal.
  • The Computer Engineering Handbook
    • Vojin G. Oklobdzija (Author)
    • 2019 (Publication Date)
    • CRC Press (Publisher)
    Pitch is a musical term that relates to the frequency of a note. A higher pitch corresponds to a higher frequency. The sample rate converter performs a double duty as a pitch shifter, enabling a single recorded note to reproduce many notes on the instrument. This provides effective data compression by reducing the number of recordings required to reproduce the sound of an instrument. In addition, it enables a variety of musical effects, such as vibrato, pitch bend, and portamento. Finally, the pitch-shifting effect of the sample rate converter emulates the Doppler effect needed by 3-D environmental audio. Thus, the sample rate converter is a fundamental building block used by nearly all facets of the digital audio system in the PC.

    23.3.4.2.1 Sample Rate Converters

    Sample rate converters come in several varieties, offering different levels of conversion quality. Higher-quality conversion requires more computation and comes at a correspondingly higher cost. Drop-sample converters require almost no computation to implement and offer the lowest quality. Linear interpolation converters require more computation and offer reasonably good quality, especially for downward pitch shift. Multipoint interpolation converters require the most computation and memory bandwidth, but provide the highest quality; however, there can be considerable variation in the quality of multipoint interpolation converters.
    To understand sample rate conversion, it is necessary to understand discrete-time sampling theory as described by Nyquist and Shannon [5,6]. To sample a signal properly, the sample rate must be at least twice the highest frequency component in the signal. The Nyquist frequency is one-half the sample rate and indicates the highest frequency component that a particular sample rate can represent. Sampling of frequency components above the Nyquist frequency results in aliases in the sampled waveform that are not present in the original signal.
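As a sketch of the linear-interpolation variety mentioned above (a simplified illustration, not the handbook's implementation), the converter reads the input at a fractional step; played back at the original rate, the result is pitch-shifted:

```python
import numpy as np

def resample_linear(x, ratio):
    """Linear-interpolation sample rate conversion; ratio = output rate / input rate."""
    n_out = int(len(x) * ratio)
    pos = np.arange(n_out) / ratio             # fractional read positions into the input
    i = np.minimum(np.floor(pos).astype(int), len(x) - 2)
    frac = pos - i
    return (1 - frac) * x[i] + frac * x[i + 1]

# Played back at the original rate, reading the input ~1.498x faster raises the pitch
# by 2**(7/12), i.e. a 440 Hz tone comes out near 659 Hz (a perfect fifth up).
fs = 48_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
fifth_up = resample_linear(tone, 1 / 2 ** (7 / 12))

zero_crossings = np.count_nonzero(np.diff(np.signbit(fifth_up)))
print(f"output frequency ~ {zero_crossings / 2 / (len(fifth_up) / fs):.0f} Hz")
```

For comparison, a drop-sample converter would simply take the nearest input sample at each read position, and a multipoint converter would replace the two-point weighting with a longer interpolation filter.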
  • Digital Signals Theory
    44100 Hz for compact disc quality audio, and which is still commonly used today.

    2.4 QUANTIZATION

    If you recall the introduction to this chapter, we saw that digitized audio has two properties: the sampling rate (already covered in this chapter), and the precision. This section concerns the precision of digital audio, but what exactly does this mean? To understand this, we'll first need to take a detour to see how computers represent numerical data.

    2.4.1 Background: digital computers and integers

    These days, most humans use the Hindu-Arabic (or decimal) numeral system to represent numbers. With decimal digits, we use the ten symbols
    0, 1, 2, …, 9
    to encode numbers as combinations of powers of ten (the base or radix of the system). For example, a number like 132 can be expanded out in terms of powers of 10:
    132 = 1·10² + 3·10¹ + 2·10⁰.
    There's nothing magical about the number 10 here: it was probably chosen to match up with the number of fingers (ahem, digits) most people possess. Any other base can work too.
    Of course, computers don't have fingers, so they might find decimal to be difficult. Computers do have logic gates though, which can represent true and false values, which we can interpret as 1 and 0, respectively. This leads us to binary numbers, which only use two symbols to encode numbers as combinations of powers of 2, rather than combinations of powers of 10.
    In our example above, the number 132 could be represented as
    132 = 1·128 + 0·64 + 0·32 + 0·16 + 0·8 + 1·4 + 0·2 + 0·1
        = 1·2⁷ + 0·2⁶ + 0·2⁵ + 0·2⁴ + 0·2³ + 1·2² + 0·2¹ + 0·2⁰,
    or, more compactly, as 10000100₂ (where the subscript lets us know we're in binary). We refer to each position as a bit (short for binary digit).
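The same expansion can be checked in a line or two of Python (illustrative, not from the text):

```python
n = 132
print(format(n, "08b"))    # '10000100'
# Recompose the value from its bits: 1*2**7 + 1*2**2 = 132
print(sum(int(bit) * 2 ** i for i, bit in enumerate(reversed(format(n, "b")))))
```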
  • Data Acquisition
    • Michele Vadursi (Author)
    • 2010 (Publication Date)
    • IntechOpen (Publisher)
    Bandpass Sampling for Data Acquisition Systems
    Leopoldo Angrisani (University of Naples Federico II, Department of Computer Science and Control Systems) and Michele Vadursi (University of Naples “Parthenope”, Department of Technologies), Italy

    1. Introduction

    A number of modern measurement instruments employed in different application fields consist of an analogue front-end, a data acquisition section, and a processing section. A key role is played by the data acquisition section, which is responsible for digitizing the input signal at a specific sample rate (Corcoran, 1999). The choice of the sample rate is connected to the optimal use of the resources of the data acquisition system (DAS). This is particularly true for modern communication systems, which operate at very high frequencies. The higher the sample rate, in fact, the shorter the observation interval and, consequently, the worse the frequency resolution allowed by the DAS memory buffer. So, the sample rate has to be chosen high enough to avoid aliasing, but at the same time, an unnecessarily high sample rate does not allow for an optimal exploitation of the DAS resources.
    As is well known, the sample rate must be correctly chosen to avoid aliasing, which can seriously affect the accuracy of measurement results. The sampling theorem, in fact, states that a band-limited signal can be sampled free of aliasing at a rate f_s greater than twice its highest frequency f_max (Shannon, 1949). As regards bandpass signals, which are characterized by a low ratio of bandwidth to carrier frequency and are typical of many digital communication systems, a much less strict condition applies. In particular, bandpass signals can be sampled free of aliasing at a rate f_s greater than twice their bandwidth B (Kohlenberg, 1953). It is worth noting, however, that this is only a necessary condition.
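The bandpass condition f_s > 2B is necessary but not sufficient; the standard statement of the allowed rate ranges can be enumerated as in the sketch below. The band edges used are illustrative, not taken from the chapter.

```python
import math

def bandpass_sampling_ranges(f_low, f_high):
    """Valid uniform sampling rates for alias-free capture of a bandpass signal in [f_low, f_high].

    Standard result: 2*f_high/n <= fs <= 2*f_low/(n - 1) for n = 1 .. floor(f_high / B).
    """
    bandwidth = f_high - f_low
    ranges = []
    for n in range(1, math.floor(f_high / bandwidth) + 1):
        lo = 2 * f_high / n
        hi = math.inf if n == 1 else 2 * f_low / (n - 1)
        ranges.append((lo, hi))
    return ranges

# Illustrative band (not from the chapter): 5 kHz of bandwidth between 20 kHz and 25 kHz.
for lo, hi in bandpass_sampling_ranges(20_000, 25_000):
    upper = "inf" if hi == math.inf else f"{hi:.0f} Hz"
    print(f"f_s between {lo:.0f} Hz and {upper}")
```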
  • Introduction To Digital Signal Processing: Computer Musically Speaking
    Ironically, the first CD title that CBS/Sony produced was not Beethoven’s 9th but Billy Joel’s 52nd Street, and the rest, as they say, is history. Nowadays we often see 24 bits as a standard way to store the amplitude values, and sampling rates of 96 kHz and even up to 192 kHz. There is much discussion going on regarding the necessity of 192 kHz sampling rates, with many arguments seemingly in favor and some against. We’ll leave it at that, since this topic itself would require another whole chapter or two at least to address the technical and non-technical issues involved.

    4.2 Aliasing

    At this point you should be asking yourself — what if I use a sampling rate that is lower than twice the maximum frequency component of the audio signal? That is, when f_s/2 < f_max? The answer is that you will experience a particular digital artifact called aliasing. The resulting digital artifact is quite interesting and presents itself as a distorted frequency component not part of the original signal. This artifact caused by under-sampling may sometimes be intriguing and perhaps useful in the context of musical composition, but when the objective is to acquire an accurate representation of the analog audio signal in the digital domain via sampling, it is undesirable. We will see in more detail how aliasing occurs in Chap. 8 after learning about the frequency domain and Fourier transforms. For now, let’s try to grasp the idea from a more intuitive time-domain approach. Let’s consider a 1 Hz sine wave sampled at f_s = 100 Hz (100 samples per second) as shown in Fig. 4.5. We know that a 1 Hz sine wave by itself sampled at f_s = 100 Hz meets the sampling theorem criterion, the Nyquist limit being at 50 Hz — any sine wave that is below 50 Hz can be unambiguously represented in the digital domain. So there should be no aliasing artifacts in this particular case (f = 1 Hz).
  • Multimedia Computing
    Many cameras and microphones acquire an analog signal, which is then sampled and quantized to convert it to digital. The sampling rate determines how many samples the digital signal will have (called either image resolution or acoustic sampling rate), and quantization determines how many intensity levels will be used to represent the intensity value at each sample. Chapters 5 and 6, as well as visual and acoustic processing books (see recommended reading section), discuss the factors that should be considered when selecting appropriate sampling and quantization rates to retain the important information in digitized files. In most modern cameras, the sampling and quantizing rates are predetermined and specified as the number of pixels (as MP, or million pixels) available on the chip used in the camera. In this case, the images are directly acquired in digital form.
    Acoustic and visual signal samples over time are often conceptualized in what is called the time domain: the x-axis of a graph showing a signal this way would be time. In the frequency domain, every point of the x-axis represents a certain frequency. Visualized as a graph, the diagram would show how the signal varies over frequency. Typically, just like in the time domain, the y-axis would show the energy or amplitude of the signal.

    LEVELS OF COMPUTATION

    An image, video, or sound file usually contains several objects. A vision application, for example, usually involves computing certain properties of an object, not the image as a whole. To compute properties of an object, individual objects must first be identified as separate objects; then object properties can be computed by applying calculations to the separate objects. For considering computational aspects of audio, visual, and multimedia processing algorithms, it helps to consider each algorithm in terms of its input-output characteristics.
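To make the time-domain/frequency-domain distinction concrete, a small sketch (illustrative values, not from the book) takes a signal defined over time and views it over frequency with an FFT:

```python
import numpy as np

fs = 8_000                                    # assumed sample rate
t = np.arange(fs) / fs                        # one second of time-domain samples
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

amplitude = np.abs(np.fft.rfft(x)) * 2 / len(x)   # frequency domain: amplitude vs frequency
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
strongest = freqs[np.argsort(amplitude)[-2:]]
print(sorted(strongest.tolist()))             # [440.0, 1000.0]
```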
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.