
Image Formation by Lenses

Image formation by lenses occurs when light rays pass through a lens and converge or diverge to create a real or virtual image. Lenses can be convex or concave, and the type of image formed depends on the object's distance from the lens and the lens's focal length. Convex lenses produce real or virtual images, while concave lenses produce only virtual images.

Written by Perlego with AI-assistance
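The convex/concave behavior summarized above can be sketched numerically. A minimal Python sketch, assuming the common real-is-positive sign convention (1/f = 1/u + 1/v, with f > 0 for a convex lens and f < 0 for a concave one); the function names are illustrative, not from any of the excerpted texts:

```python
def image_distance(u, f):
    """Image distance v from the thin-lens equation 1/f = 1/u + 1/v
    (real-is-positive convention: u > 0 for a real object)."""
    if u == f:
        return float('inf')  # rays emerge parallel: no image forms
    return 1.0 / (1.0 / f - 1.0 / u)

def classify(u, f):
    """A positive image distance means a real image; negative, virtual."""
    v = image_distance(u, f)
    return "real" if v > 0 else "virtual"
```

With f = 10 a convex lens gives a real image for u = 30 (v = 15) but a virtual one for u = 5, while a concave lens (f = -10) yields a virtual image at every object distance, matching the claim above.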

7 Key excerpts on "Image Formation by Lenses"

  • Optics
    eBook - PDF

    Principles and Applications

    • Kailash K. Sharma (Author)
    • 2006 (Publication Date)
    • Academic Press (Publisher)
    Chapter 11: Image Formation and Optical Processing. 11.1 Introduction. In the paraxial approximation of geometrical optics, a lens forms a point image of a point object and a line image of a line object (Fig. 11.1: paraxial image formed by a lens). Spherical wavefronts emanating from different points of the object carry with them complete information on the amplitude and phase distributions of the object field. A lens bends and converges these wavefronts to form the image (Fig. 5.1). Optical path lengths of all paths between an object point and its image are exactly equal. Constructive interference among the arriving waves reinforces the field at the image point. The field distribution in the object plane and the geometry of the imaging configuration determine the brightness of the image. The location and magnification of the image can be obtained from 1/v − 1/u = 1/f (4.51b) and m = h′/h = v/u (4.52a), where u and v are the object and image distances from the lens, respectively. Under ideal conditions, the image field distribution E(x, y) of a two-dimensional object (Fig. 11.2: imaging a two-dimensional object by a lens) is described, except for the image magnification, by the object field distribution function E_O(x′, y′), i.e., E(x, y) = (1/m) E_O(x′ = x/m, y′ = y/m) (11.1). This equation relates the field at point (x, y) in the image plane to the field at the conjugate point (x′, y′) in the object plane. Equation (11.1) neglects losses in the process of image formation. A lens with a large aperture minimizes the losses and diffraction effects in image formation. But a lens with a large aperture causes image degradation, as described in Chapter 5. In fact, to reduce image aberrations, aperture stops are often used.
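The relations quoted in this excerpt (Eqs. 4.51b, 4.52a, and 11.1) can be sketched in code. This is a minimal illustration using the excerpt's Cartesian sign convention, in which a real object sits at u < 0 and v > 0 marks a real image; the helper names are our own, not Sharma's:

```python
def image_location_and_magnification(u, f):
    """Cartesian convention: 1/v - 1/u = 1/f, magnification m = v/u.
    A real object sits at u < 0; v > 0 is a real image on the far side."""
    v = 1.0 / (1.0 / f + 1.0 / u)
    m = v / u
    return v, m

def image_field(E_obj, m):
    """Eq. (11.1): the ideal (lossless) image field is the object field,
    rescaled by the magnification: E(x, y) = (1/m) * E_O(x/m, y/m)."""
    return lambda x, y: (1.0 / m) * E_obj(x / m, y / m)
```

For f = 10 and an object at u = -30 this gives v = 15 and m = -0.5: a real, inverted image at half scale, whose field is the object field stretched by m as Eq. (11.1) prescribes.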
  • Introductory Biomedical Imaging
    eBook - PDF

    Principles and Practice from Microscopy to MRI

    • Bethe A. Scalettar, James R. Abney (Authors)
    • 2022 (Publication Date)
    • CRC Press (Publisher)
    Section I: Microscopy (DOI: 10.1201/b22076-4). Chapter 3: Introduction to Image Formation by the Optical Microscope. The lens is the heart of a microscope. We therefore begin our study of microscopy with a conceptual discussion of how lenses form images by appropriately reconfiguring, or reshaping, a wavefront. For the moment, we will treat the topic by considering the rectilinear propagation of rays in homogeneous media and the bending of rays at interfaces, while neglecting the effects of diffraction and interference. This is the domain of geometrical optics, which is valid when the wavelength of EM radiation is negligible. Geometrical optics will allow us to map an object into an image and predict key image properties, such as location and magnification. Wave optics, which describes interference and diffraction, will allow us to analyze image quality, such as resolution and contrast. Wave optics is discussed in Chapters 4 and 5. 3.1 Lens Function. Consider first an object, like the cat face in Fig. 3.1, that is viewed in reflected light. The object serves as a collection of scattering centers that generate diverging spherical waves with energy propagating along the direction of the rays. The purpose of a lens is to capture part of the wave that diverges from each source point and cause it to converge at a corresponding point to form an image of the object. To accomplish this goal, diverging spherical waves are converted into converging spherical waves. There are two key ways to understand how a converging lens achieves this purpose. The first explanation is reshaping of the wavefront caused by lens-induced changes in wave speed. As shown in Fig. 3.2, a converging (convex) lens, which bulges in the middle, first reshapes a diverging spherical wave by intercepting and slowing the central, leading edge of the wave before intercepting and slowing the edges. As a consequence, the divergence of the spherical wavefront is reduced.
  • Practical Handbook on Image Processing for Scientific and Technical Applications

    • Bernd Jahne (Author)
    • 2004 (Publication Date)
    • CRC Press (Publisher)
    Chapter 4: Image Formation. 4.1 Highlights. Image formation includes geometric and radiometric aspects. The geometry of imaging requires the use of several coordinate systems (Section 4.3.1): world coordinates attached to the observed scene, camera coordinates aligned to the optical axis of the optical system (Section 4.3.1a), image coordinates related to the position on the image plane (Section 4.3.1b), and pixel coordinates attached to an array of sensor elements (Section 4.3.2b). The basic geometry of an optical system is a perspective projection. A pinhole camera (Section 4.3.2a) models the imaging geometry adequately. A perfect optical system is entirely described by the principal points, the aperture stop, and the focal length (Section 4.3.2c). For the setup of an optical system it is important to determine the distance range that can be imaged without noticeable blur (depth of field, Section 4.3.2d) and to learn the difference between normal, telecentric, and hypercentric optical systems (Section 4.3.2e). The deviations of a real optical system from a perfect one are known as lens aberrations and include spherical aberration, coma, astigmatism, field curvature, distortion, and axial and lateral color (Section 4.3.2f). Wave optics describes the diffraction and interference effects caused by the wave nature of electromagnetic radiation (Section 4.3.3). Essentially, the radiance at the image plane is the Fourier transform of the spatial radiance distribution of a parallel wavefront at the lens aperture, and the crucial parameter of the optical system is the numerical aperture. Central to the understanding of the radiometry of imaging is the invariance property of the radiance (Section 4.3.4a). This basic property makes it easy to compute the irradiance at the image plane. Linear system theory is a general, powerful concept for describing the overall performance of optical systems (Section 4.3.5).
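The coordinate chain in these highlights (world → camera → image → pixel coordinates, with a pinhole model for the projection) can be sketched as follows. The parameter names (pixel pitch sx, sy; principal point cx, cy) are illustrative assumptions, not Jahne's notation:

```python
def pinhole_project(X, Y, Z, f):
    """Perspective projection of a point given in camera coordinates
    (Z along the optical axis) onto an image plane at distance f."""
    x = f * X / Z
    y = f * Y / Z
    return x, y

def to_pixel(x, y, sx, sy, cx, cy):
    """Image coordinates -> pixel coordinates: divide by the pixel
    pitch (sx, sy) and shift to the principal point (cx, cy); the
    y axis is flipped because pixel rows increase downward."""
    return cx + x / sx, cy - y / sy
```

A point at (0.1, 0.2, 10) m seen through f = 50 mm lands at (0.5, 1.0) mm on the image plane; with a 10 µm pixel pitch and principal point (320, 240) that is pixel (370, 140).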
  • Optical Imaging and Aberrations, Part II. Wave Diffraction Optics, Second Edition
    Chapter 1: Image Formation. 1.1 Introduction. In Part I of Optical Imaging and Aberrations [1], we showed how to determine the location and size of the image of an object formed by an imaging system in terms of the location and size of the object and certain parameters of the system. We discussed the relationship between the irradiance distribution of the image and the radiance distribution of the object, including the cosine-to-the-fourth-power dependence on the field angle and vignetting of the rays by the system. We discussed the ray distribution of the aberrated image of a point object, called the geometrical point-spread function (PSF) or the spot diagram. We showed how to design and analyze imaging systems in terms of their primary aberrations. We also showed how to calculate the primary aberrations of a multisurface optical system in terms of their radii of curvature, the spacings between them, and the refractive indices associated with those spacings. We pointed out that the image obtained in practice differs from that predicted by geometrical optics. For example, when the system is aberration free, all of the rays from a point object transmitted by the system converge to its Gaussian image point according to geometrical optics. In reality, however, the image obtained is not a point. Because of diffraction of object radiation at the aperture stop of the system or, equivalently, at its exit pupil, the actual aberration-free image for a circular exit pupil is a light patch surrounded by dark and bright diffraction rings. The determination of the characteristics of the diffraction image of an object formed by an aberrated system is the subject of Part II. In this chapter, we first describe the diffraction theory of image formation of incoherent objects, i.e., objects for which the radiation from one of its parts is incoherent with that from another.
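The aberration-free diffraction image described here, a bright central patch ringed by dark and bright fringes, is the Airy pattern; for a circular exit pupil its first dark ring falls at a radius of roughly 1.22 λN in the focal plane, where N is the working f-number. A one-line sketch of that standard rule of thumb (the 1.22 factor approximates the first zero of the Bessel function J1 divided by π):

```python
def airy_first_minimum_radius(wavelength, f_number):
    """Radius of the first dark ring of the diffraction-limited image
    of a point source through a circular pupil: r ~= 1.22 * lambda * N."""
    return 1.22 * wavelength * f_number
```

For green light (550 nm) at f/8 this gives about 5.4 µm, which is why even a perfectly corrected system cannot image a point object as a point.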
  • From Photon to Neuron
    eBook - PDF

    Light, Imaging, Vision
    They travel in straight lines and reflect or refract following the appropriate laws that we have previously found. The ray picture neglects diffraction. Nevertheless, it does help us to visualize roughly how light behaves in compound lens systems (Figure 6.9), and in other more complex contexts than those considered so far. 6.5.2 Real and virtual images. Conventional microscopy, as developed by Robert Hooke in the 17th century, involves illuminating a specimen, collecting the scattered light, and forming a magnified image. In the apparatus sketched in Figure 6.9, each ray bends at each air-glass interface according to the law of refraction. If there is a point at which many rays from the same part of the specimen reconverge, we say that the rays “focus” there. If we find a plane consisting of focal points for each point in a thin specimen (between the two lenses in the figure), we say that a real image is formed there. Up to now, this chapter has dealt exclusively with the formation of real images. Ultimately, a microscope must form a real image, either in our eye or in a camera. However, there is a second useful notion of image, distinct from this one. When we look through the compound lens system in Figure 6.9, our eye accommodates to bring the real image into focus. Because our brain expects light to travel on straight lines, it interprets the image as having come from an object located at a distance that would have been in focus with that much accommodation, had there been no microscope. The imagined object at this location is called the virtual image of the specimen. 6.5.3 Spherical aberration. We have seen that the key to focusing light is to introduce an element that adds a time delay to each photon path that passes through it.
  • Building Earth Observation Cameras

    • George Joseph (Author)
    • 2015 (Publication Date)
    • CRC Press (Publisher)
    Chapter 2: Image Formation. 2.1 Introduction. An image is a projection of a three-dimensional scene in the object space to a two-dimensional plane in the image space. An ideal imaging system should map every point in the object space to a defined point in the image plane, keeping the relative distances between the points in the image plane the same as those in the object space. An extended object can be regarded as an array of point sources. The image so formed should be a faithful reproduction of the features (size, location, orientation, etc.) of the targets in the object space to the image space, except for a reduction in the size; that is, the image should have geometric fidelity. The imaging optics does this transformation from object space to image space. The three basic conditions that an imaging system should satisfy to have a geometrically perfect image are (Wetherell 1980) as follows: 1. All rays from an object point (x, y) that traverse the imaging system should pass through the image point (x′, y′). That is, all rays from an object point converge precisely to a point in the image plane. The imaging is then said to be stigmatic. 2. Every element in the object space that lies on a plane normal to the optical axis must be imaged as an element on a plane normal to the optical axis in the image space. This implies that an object that lies in a plane normal to the optical axis will be imaged on a plane normal to the optical axis in the image space. 3. The image height h must be a constant multiple of the object height, no matter where the object point (x, y) is located in the object plane. The violation of the first condition causes image degradations, which are termed aberrations. The violation of the second condition produces field curvature, and the violation of the third condition introduces distortions. Consequences of these deviations will be explained in Section 2.4.
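Wetherell's conditions amount to requiring that an ideal system map every object point (x, y) to (m·x, m·y) for one constant scale m across the whole field. A small sketch of a numerical check of the constant-magnification condition on sampled point pairs; the function name and data layout are hypothetical, for illustration only:

```python
def is_geometrically_faithful(pairs, tol=1e-9):
    """Check that sampled (object, image) point pairs share one constant
    magnification m, i.e. (x, y) -> (m*x, m*y) for every field position.
    pairs: list of ((x, y), (xi, yi)) tuples."""
    m = None
    for (x, y), (xi, yi) in pairs:
        if x == 0 and y == 0:
            continue  # the on-axis point fixes no scale
        # candidate scale from whichever coordinate is nonzero
        cand = xi / x if x != 0 else yi / y
        if m is None:
            m = cand
        if abs(xi - m * x) > tol or abs(yi - m * y) > tol:
            return False  # magnification varies with field: distortion
    return True
```

A perfect inverted half-scale image (m = -0.5) passes the check; perturbing one off-axis image point, as pincushion or barrel distortion would, makes it fail.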
  • Computer Techniques for Image Processing in Electron Microscopy

    • W. O. Saxton, L. Marton, Claire Marton (Authors)
    • 2013 (Publication Date)
    • Academic Press (Publisher)
    Chapter 1: Image Formation Theory. 1.1 Electron Optics. W. Hoppe has referred to the interference patterns known as micrographs; this introductory chapter gives a brief account of the considerations reflected in this remark. We begin not at the beginning but in the middle, outlining first the propagation of electron waves through the optical system of the microscope and turning later to how the object information is imposed on the beam and how it may be extracted from it. Electron optics is an interesting blend of classical (relativistic) dynamics, wave mechanics, and light optics. A comprehensive and authoritative account of the fundamental theory is given by Glaser (1952) and also, in abbreviated form, by Glaser (1956); the transition to the Fourier theory is concisely expressed by Hawkes (1973b). The principal elements in the optical systems of electron microscopes are magnetic lenses with rotational symmetry; the symmetry property means that the field is determined everywhere by its value on the axis, B(z). If the electrons have been accelerated through a potential Φ, the equations of motion can be conveniently separated in the paraxial approximation (all trajectories near and at small angles to the axis) by the introduction of a rotating Cartesian coordinate system (x, y, z), the angle of rotation varying with z according to dθ/dz = B (e/8mV)^(1/2) (1.1.1), V being the relativistically corrected accelerating voltage Φ(1 + eΦ/2m₀c²). In this system, the x and y equations both take the form x″ + (e/8mV) B² x = 0 (1.1.2) (prime denoting differentiation with respect to z). The solutions of this are obviously oscillatory, so that all lenses of this type are converging. More important is the fact that the equation is linear. Choosing an arbitrary object plane z₀, we choose the two fundamental solutions to be g(z) and h(z), satisfying g(z₀) = h′(z₀) = 1, g′(z₀) = h(z₀) = 0. (1.1.3)
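The paraxial ray equation in this excerpt, x″ + (e/8mV) B² x = 0, can be integrated numerically to show why every rotationally symmetric magnetic lens converges: the coefficient of x is never negative, so the restoring term always bends the ray back toward the axis. A sketch assuming a hypothetical top-hat field model (not Glaser's bell-shaped field), with k standing in for e/8mV:

```python
def trace_paraxial_ray(B, k, x0, xp0, z0, z1, n=20000):
    """Integrate x'' + k * B(z)**2 * x = 0 (k = e/8mV > 0) with the
    velocity-Verlet scheme; returns (x, x') at z = z1. Because the
    'acceleration' -k B^2 x always opposes x, the ray is bent toward
    the axis wherever the field is nonzero."""
    h = (z1 - z0) / n
    x, xp, z = x0, xp0, z0
    for _ in range(n):
        a0 = -k * B(z) ** 2 * x
        x += h * xp + 0.5 * h * h * a0
        a1 = -k * B(z + h) ** 2 * x
        xp += 0.5 * h * (a0 + a1)
        z += h
    return x, xp

def top_hat(B0, za, zb):
    """Hypothetical field model: B = B0 inside [za, zb], zero outside."""
    return lambda z: B0 if za <= z <= zb else 0.0
```

Inside a constant field the solution is sinusoidal in z, as the excerpt notes; a ray entering parallel to the axis (x0 = 1, x0′ = 0) with kB0² = 1 leaves a unit-length lens at x = cos(1) ≈ 0.54 with a negative slope, i.e., converging.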
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.