Technology & Engineering

Interpolation

Interpolation is a method used to estimate values between known data points. In the context of technology and engineering, it is commonly used in computer graphics, signal processing, and numerical analysis to create smooth curves or surfaces from discrete data points. By filling in the gaps between known points, interpolation helps to provide a more complete and continuous representation of the data.

Written by Perlego with AI-assistance

3 Key excerpts on "Interpolation"

  • Digital Image Interpolation in Matlab
    • Chi-Wah Kok, Wing-Shan Tam (Authors)
    • 2018 (Publication Date)
    • Wiley-IEEE Press (Publisher)
    4 Nonadaptive Interpolation
    There are a number of treatments of the interpolation process in the literature. In this book, we adopt the oldest and most widely accepted definition of interpolation found in modern science:
    to insert (an intermediate term) into a series by estimating or calculating it from surrounding known values
    Such a definition of "interpolation" can be found in the Oxford dictionary and can be fulfilled using model-based signal processing techniques. As a result, in digital signal processing, interpolation is also referred to as the model-based recovery of continuous data from discrete samples within a given range of abscissa [61]. In the context of digital images, interpolation is further refined to describe the process that estimates the grayscale values of pixels on a high-density sampling grid from a given set of pixels on a less dense grid, such that the interpolated image is close to the actual analog image sampled with the high-density grid. We have discussed what it means for two images to be "close," where closeness can be measured by the objective and subjective metrics described in Chapter . An example of an image interpolation method known as nearest neighbor was presented in Section 2.7.3; it follows the algorithm shown in Figure 4.1 to generate an image with twice the size (a sampling grid with twice the density) of the original image. The size expansion is achieved by first increasing the sampling density, such that every block in the resolution-expanded image is considered a local block (with one known pixel at the upper left corner and the rest of the pixels unknown). The nearest neighbor interpolation method fills in the unknown pixels by replicating the pixels with known values in the local block, as shown in Figure 2.26c. This pixel replication method usually results in poor image quality for most natural images, where both the peak signal-to-noise ratio (PSNR) and Structural SIMilarity (SSIM) …
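The pixel-replication scheme described in the excerpt can be sketched in a few lines of plain Python. This is a minimal illustration, not the book's Matlab implementation: it represents a grayscale image as a list of rows and doubles the sampling grid by copying each known pixel into its 2×2 local block.

```python
def nearest_neighbor_upscale_2x(image):
    """Double the sampling grid of a grayscale image (a list of rows)
    by replicating each known pixel into its 2x2 local block."""
    out = []
    for row in image:
        expanded = []
        for pixel in row:
            expanded.extend([pixel, pixel])  # replicate horizontally
        out.append(expanded)
        out.append(list(expanded))           # replicate vertically
    return out

# A 2x2 image becomes 4x4; each value fills its 2x2 block.
print(nearest_neighbor_upscale_2x([[10, 20], [30, 40]]))
```

As the excerpt notes, this replication is cheap but produces blocky results on natural images, which is why the higher-order kernels discussed later generally score better on PSNR and SSIM.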
  • Numerical Analysis for Engineers: Methods and Applications, Second Edition
    • Bilal M. Ayyub, Richard H. McCuen (Authors)
    • 2015 (Publication Date)
    6  Numerical Interpolation

    6.1  Introduction

    Problems requiring interpolation between individual data points occur frequently in science and engineering. For example, the design of a computerized energy control system for a building may require the typical daily temperature variation in the building as input data. Sample temperature values would be measured in the building at discrete time points. However, the computer program for the energy control system may require temperatures at times other than those at which the sample measurements were taken, such as on an hourly increment. One way to overcome this problem is to have the program fit a curve to the measured temperature points and interpolate for values between the sample measurements.
    Interpolation is required in many engineering applications that use tabular data as input. The basis of all interpolation algorithms is the fitting of some type of curve or function to a subset of the tabular data; linear interpolation uses a straight line. Interpolation algorithms differ in the form of their interpolation functions.
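The tabular-data workflow just described can be sketched as a short routine. This is an illustrative example, not code from the book; the hourly temperature values below are invented for the sketch. It finds the pair of tabulated points bracketing the query and fits a straight line through them.

```python
def linear_interpolate(table, x):
    """Estimate f(x) from sorted (x, y) pairs by fitting a straight
    line to the pair of tabulated points that brackets x."""
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x is outside the tabulated range")

# Hypothetical sampled temperatures: (hour of day, degrees C)
temps = [(6, 14.0), (9, 18.0), (12, 23.0)]
print(linear_interpolate(temps, 7.5))  # halfway between 14.0 and 18.0 -> 16.0
```

Swapping the straight-line fit for a higher-order polynomial or spline over the same bracketing logic gives the other interpolation algorithms the chapter goes on to compare.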

    6.2  Method of Undetermined Coefficients

    The method of undetermined coefficients is conceptually the simplest interpolation algorithm, and it illustrates many of the key points that hold for all interpolation schemes. In this method, an nth-order polynomial is used as the interpolation function, f(x):
    (6.1)  f(x) = b₀ + b₁x + b₂x² + b₃x³ + … + bₙxⁿ
    The constants in Equation 6.1, b₀, b₁, b₂, b₃, …, bₙ, are determined using the measured data points.
    As an example of the method of undetermined coefficients, consider the simplest case: linear interpolation, that is, an interpolation function consisting of a straight line. Table 6.1 contains numbers and their cubes for the purpose of illustration.
    TABLE 6.1  Data for Linear Interpolation
    x    x³
    0     0
    1     1
    2     8
    3    27
    4    64
    The method of undetermined coefficients is used to estimate the cube of 2.2. Of course, in this case the answer could be calculated exactly if we desired; however, linear interpolation is used instead to illustrate the general concepts. By truncating all except the first two terms of Equation 6.1, the linear interpolation function has the form …
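The excerpt's worked example can be carried to a number. The sketch below, assuming the bracketing points (2, 8) and (3, 27) from Table 6.1, solves the two-equation system for the undetermined coefficients b₀ and b₁ of the truncated form f(x) = b₀ + b₁x and evaluates it at x = 2.2.

```python
def undetermined_coefficients_linear(p0, p1, x):
    """Fit f(x) = b0 + b1*x through two data points by solving the
    2x2 system of undetermined coefficients, then evaluate at x."""
    (x0, y0), (x1, y1) = p0, p1
    b1 = (y1 - y0) / (x1 - x0)   # subtracting the two equations eliminates b0
    b0 = y0 - b1 * x0            # back-substitute into either equation
    return b0 + b1 * x

# Table 6.1: bracket x = 2.2 with (2, 8) and (3, 27)
estimate = undetermined_coefficients_linear((2, 8), (3, 27), 2.2)
print(estimate)   # approximately 11.8, versus the exact cube 2.2**3 = 10.648
```

The gap between 11.8 and the true value 10.648 is the truncation error of dropping the quadratic and cubic terms, which motivates the higher-order interpolation functions discussed later in the chapter.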
  • Encyclopedia of Image Processing
    • Phillip A. Laplante (Author)
    • 2018 (Publication Date)
    • CRC Press (Publisher)
    In the context of digital communication and signal processing, interpolation was studied by C. Shannon, H. Nyquist, and J. M. Whittaker in their famous sampling theorem for band-limited signals, while in the field of numerical analysis it was studied by Schoenberg in the guise of an approximation problem, [13] and the concept of the spline was developed for interpolation purposes. [14] The connections between these two lines of research can be found in a tutorial review [15] in which higher-order kernels are found to possess considerably better low-pass properties than linear interpolators. In the 1970s, the field of digital image processing started to develop, and so did image interpolation. One of the first applications related to image interpolation was the geometric rectification of digital images obtained from the first Earth Resources Technology Satellite, launched in 1972. Cubic convolution-based interpolation was based on a sinc-like kernel composed of piecewise cubic polynomials; its two-dimensional extension, bicubic interpolation, is still in wide use today. A more sophisticated interpolation technique based on the cubic B-spline was developed in 1978 [3] and has remained influential. A comparative study of various competing interpolation techniques was conducted by Parker et al. [16] in 1983. In the late 1980s and early 1990s, information technology experienced a boom that brought about a revolution in communication and computing. As digital images became ubiquitous in various multimedia applications, there was growing interest in obtaining a more fundamental understanding of image signals and developing more powerful image processing algorithms. The importance of spatial adaptation has long been recognized for the problem of image interpolation, but there was still no principled approach toward spatially adaptive interpolation …
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.