Advanced Biomedical Image Analysis
eBook - ePub

About this book

A comprehensive reference of cutting-edge advanced techniques for quantitative image processing and analysis

Medical diagnostics and intervention, and biomedical research rely progressively on imaging techniques, namely, the ability to capture, store, analyze, and display images at the organ, tissue, cellular, and molecular level. These tasks are supported by increasingly powerful computer methods to process and analyze images. This text serves as an authoritative resource and self-study guide explaining sophisticated techniques of quantitative image analysis, with a focus on biomedical applications. It offers both theory and practical examples for immediate application of the topics as well as for in-depth study.

Advanced Biomedical Image Analysis presents methods in the four major areas of image processing: image enhancement and restoration, image segmentation, image quantification and classification, and image visualization. In each instance, the theory, mathematical foundation, and basic description of an image processing operator is provided, as well as a discussion of performance features, advantages, and limitations. Key algorithms are provided in pseudo-code to help with implementation, and biomedical examples are included in each chapter.

Image registration, storage, transport, and compression are also covered, and there is a review of image analysis and visualization software.

Members of the academic community involved in image-related research as well as members of the professional R&D sector will rely on this volume.
It is also well suited as a textbook for graduate-level image processing classes in the computer science and engineering fields.

Advanced Biomedical Image Analysis by Mark Haidekker is available in PDF and ePUB format and is catalogued under Biological Sciences & Biotechnology.

Information

Publisher: Wiley
Year: 2011
Print ISBN: 9780470624586
eBook ISBN: 9781118099483
CHAPTER 1
IMAGE ANALYSIS: A PERSPECTIVE
The history of biomedical imaging is comparatively short. In 1895, Wilhelm Conrad Röntgen discovered a new type of radiation, which he called the x-ray. The discovery caused a revolution in medicine, because for the first time it became possible to see inside the human body without surgery. Use of x-rays in medical centers spread rapidly, but despite their vast popularity, little progress was made for over half a century. Soon after the discovery of x-rays, materials were discovered that exhibited visible-light fluorescence when illuminated by x-rays. With such materials, the quantum efficiency of film-based x-ray imaging could be improved and the exposure of patients to radiation thus reduced. Contrast agents were introduced around 1906 to allow imaging of some soft tissues (namely, intestines), which show low x-ray contrast. For about six decades, x-ray tubes, film, and x-ray intensifying materials were improved incrementally, but no fundamental innovation was made.
After World War II, the next important development in biomedical imaging finally arrived—ultrasound imaging. The medical technology was derived from military technology: namely, sonar (sound navigation and ranging), which makes use of sound propagation in water. Applying the same principles to patients, sound echoes made visible on oscilloscope-like cathode ray screens allowed views into a patient’s body without the use of ionizing radiation. The relative simplicity of creating sound waves and amplifying reflected sound made it possible to generate images with analog electronics—in the early stages with vacuum tubes. Electronic x-ray image intensifiers were a concurrent development. X-ray image intensifiers are electronic devices that are based on a conversion layer that emits electrons upon x-ray exposure. These electrons are collected and amplified, then directed onto a luminescent phosphor. Here, the image is formed with visible light and can be picked up by a video camera. Electronic intensifiers made it possible to further reduce patient exposure to x-rays and speed up the imaging process to a point where real-time imaging became possible. At this time, video cameras could be used to record x-ray images and display them instantly on video screens. Interventional radiology and image-guided surgery became possible.
The next major steps in biomedical imaging required an independent development: the evolution of digital electronics and the microprocessor. Milestones were the invention of the transistor (1948),1 the integrated circuit as a prerequisite for miniaturization (1959), and the first single-chip microprocessor (1971).20 Related to these inventions was the first integrated-circuit random-access memory (RAM; 1970).62 Although the microprocessor itself was built on the principle of the programmable computer devised by Konrad Zuse in 1936, the miniaturization was instrumental in accumulating both computing power and memory in a reasonable space. Early digital computers used core memory, which got its name from small ferrite rings (cores) that could store 1 bit of information because of their magnetic remanence. Core memory was already a considerable achievement, with densities of up to 100 bits/cm2. Early RAM chips held 10 times the memory capacity on the same chip surface area. In addition, integrated-circuit RAM did away with one disadvantage of core memory: the fact that a core memory read operation destroyed the information in the ferrite rings. Consequently, read and write operations with integrated-circuit RAM were many times faster. For four decades, integration density, and with it both memory storage density and processing power, has grown exponentially, a phenomenon known as Moore’s law. Today’s memory chips easily hold 1 trillion bits per square centimeter.
The evolution of digital electronic circuits and computers had a direct impact on computer imaging. Image processing is memory-intensive and requires a high degree of computational effort. With the growing availability of computers, methods were developed to process images digitally. Many fundamental operators15,18,24,32,36,43,64,72 were developed in the 1960s and 1970s. Most of these algorithms are in common use today, although memory restrictions at that time prevented widespread use. A medical image of moderate resolution (e.g., 256×256 pixels) posed a serious challenge for a mainframe computer with 4096 words of core memory, but today’s central processing units (CPUs) would effortlessly fit the same image in their built-in fast cache memory without even having to access the computer’s main memory. A convolution of the 256×256-pixel image with a 3×3 kernel requires almost 600,000 multiplications and the same number of additions. Computers in the 1970s were capable of executing on the order of 100,000 to 500,000 instructions per second (multiplication usually requires multiple instructions), and the convolution above would have cost several seconds of CPU time. On today’s computers, the same convolution operation would be completed within a few milliseconds.
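The operation count quoted above can be verified with a short sketch. The naive triple-loop convolution below counts every multiplication explicitly; restricted to the valid region it performs 254 × 254 × 9 = 580,644 multiplications, and padding the borders to produce a full 256 × 256 output brings the count to 256 × 256 × 9 = 589,824, the "almost 600,000" in the text. The function name and averaging kernel are illustrative, not from the book.

```python
import numpy as np

def convolve3x3(image, kernel):
    """Naive 3x3 convolution over the valid region, counting multiplications."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    mults = 0
    for y in range(h - 2):
        for x in range(w - 2):
            acc = 0.0
            for ky in range(3):          # each output pixel touches the
                for kx in range(3):      # 9 kernel coefficients once
                    acc += image[y + ky, x + kx] * kernel[ky, kx]
                    mults += 1
            out[y, x] = acc
    return out, mults

image = np.random.rand(256, 256)
kernel = np.full((3, 3), 1.0 / 9.0)     # simple averaging kernel
_, mults = convolve3x3(image, kernel)
print(mults)   # 254 * 254 * 9 = 580644 multiplications for the valid region
```

On a modern CPU this pure-Python loop still takes noticeably longer than an optimized library call, which gives a feel for why the same operation strained a 1970s mainframe.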
The availability of early mainframe computers and minicomputers for data processing enabled new revolutionary imaging modalities. In 1917, mathematician J. Radon showed that a function can be represented (transformed) by an infinite set of its line integrals.60 Almost 50 years later, when mainframe computers became widely accessible, A. M. Cormack developed an algorithm based on Radon’s idea,13,14 which in turn helped G. Hounsfield develop the computed tomography (CT) scanner.37 Cormack and Hounsfield shared a Nobel prize in 1979 for development of the CT scanner. In fact, CT was a completely new type of imaging modality because it requires computer data processing for image formation: The x-ray projections collected during a CT scan need to be reconstructed to yield a cross-sectional image, and the reconstruction step takes place with the help of a computer.42 Other imaging modalities, such as single-photon emission computed tomography (SPECT) and magnetic resonance imaging (MRI), also require the assistance of a computer for image formation.
Another important development in biomedical imaging resulted from the use of radioactively labeled markers. One such example is indium pentetreotide, a compound that acts as an analog for somatostatin and tends to accumulate in neuroendocrine tumors of the brain.69 Indium pentetreotide can be labeled with radioactive 111In, a gamma emitter. Another example is fluorodeoxyglucose, a glucose analog. Fluorodeoxyglucose accumulates at sites of high metabolic activity. When fluorodeoxyglucose is labeled with 18F, it becomes a positron emitter. Radiation emission becomes stronger near active sites where the radiolabeled markers accumulate, and with suitable devices, tomographic images of the concentration of the radioactive compounds can be gathered. The use of positron emitters that create gamma rays as a consequence of electron–positron annihilation events was proposed in 195178 and eventually led to positron emission tomography (PET).6 With radiolabeled physiologically active compounds (radiopharmaceuticals), it became possible to obtain images of physiological processes. These imaging methods not only improved the diagnosis of carcinomas, but also helped in our understanding of physiological processes, most notably brain activity. Functional imaging has become a key tool in medical diagnosis and research.
Subsequent research and development aimed at the improvement of image quality (e.g., improvement of resolution, better contrast, less noise). Current trends also include the increased use of three-dimensional imaging and the growing involvement of computers in image processing and image analysis. A detailed overview of current trends is given in Section 1.3.
1.1. MAIN BIOMEDICAL IMAGING MODALITIES
A number of fundamentally different methods of obtaining images from tissue, called imaging modalities, emerged during the historical development of biomedical imaging, and the information they provide differs from one modality to the next. It is outside our scope here to provide a detailed description of the physical and engineering foundations of the modalities, but a short overview is provided for completeness.
X-ray Imaging X-ray imaging is a projection method. The patient is illuminated by x-rays, high-energy photons that penetrate the body. Some of the x-rays are absorbed in the tissue. X-rays predominantly follow a straight path. The absorption process can be described by the Lambert–Beer law:
(1.1)  I(x,y) = I0 · exp[ −∫s μ(x,y,z) ds ]

where I is the x-ray intensity that passes through a patient’s body, I0 the incident x-ray intensity, μ(x,y,z) the x-ray absorption coefficient at any spatial location (x,y,z), and the integration takes place along a straight line s, which intersects with the x-ray film at (x,y). At this location, the film is blackened by the x-rays, and the more x-rays that pass through the body, the higher the optical density of the film. At the end of this process, the film contains a two-dimensional distribution of optical density that relates to the tissue distribution inside the patient. If path s passes through bone, for example, the optical density at the end of that path is lower than that of a neighboring path, s′, that traverses only soft tissue. In the case of film-based x-ray imaging, the film needs to be digitized with a film scanner to obtain a digital image. Filmless x-ray imaging with a digital detector is becoming more common.
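For a beam that crosses a stack of homogeneous tissue layers, the line integral in Equation (1.1) collapses to a sum of μ·d products. The sketch below evaluates the law for the two paths discussed in the text; the absorption coefficients are illustrative round numbers, not tabulated tissue values.

```python
import math

def transmitted_intensity(I0, layers):
    """Lambert-Beer law for a beam crossing homogeneous layers:
    I = I0 * exp(-sum(mu_i * d_i)), with mu in 1/cm and thickness d in cm."""
    total_attenuation = sum(mu * d for mu, d in layers)
    return I0 * math.exp(-total_attenuation)

# Illustrative coefficients: soft tissue mu = 0.2/cm, bone mu = 0.5/cm.
layers_soft = [(0.2, 10.0)]              # path s': 10 cm of soft tissue only
layers_bone = [(0.2, 9.0), (0.5, 1.0)]   # path s:  9 cm soft tissue + 1 cm bone

I_soft = transmitted_intensity(1.0, layers_soft)
I_bone = transmitted_intensity(1.0, layers_bone)
print(I_bone < I_soft)   # True: the path through bone attenuates more,
                         # so the film behind it stays lighter
```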
Computed Tomography Computed tomography (CT) is an x-ray-based imaging method used to obtain a two- or three-dimensional map of absorbers inside the imaged object. The principle behind CT is to collect many projections, following Equation (1.1), at various angles θ relative to the imaged object. One projection consists of measured attenuation values along parallel beams that are displaced a distance t from the center of rotation. When the incident beam intensity is known, the line integral along s can be represented by the computed attenuation p at detector position t and angle θ. Let us assume that the Fourier transform of the absorption map μ(x,y) is M(u,v) = F{μ(x,y)}, where the symbol F denotes the Fourier transform and u and v are the axes of the frequency-domain coordinate system (a detailed explanation is provided in Chapter 3). It can be shown that the one-dimensional Fourier transform of the projection with respect to t, F{p(t,θ)}, is identical to a one-dimensional cross section of the Fourier transform of the absorber map M(u,v) subtending an angle θ with the u-axis. This relationship is known as the Fourier slice theorem. In CT, the projections p(t,θ) are obtained during the scanning process, but the absorption map μ(x,y) is unknown. The purpose of the scanning process is therefore to obtain many projection scans p(t,θ), to perform a Fourier transform, and to enter them at the angle θ into a placeholder M(u,v), thereby filling as many elements of M(u,v) as possible. The cross-sectional slice μ(x,y) is then obtained by computing the inverse Fourier transform of M(u,v). Other reconstruction methods also exist (a comprehensive overview of CT reconstruction techniques is presented by Kak and Slaney42), as well as reconstruction algorithms for beams that are not parallel but fan- or cone-shaped. To obtain projections at different angles, a CT scanner contains an x-ray source and a detector array mounted on opposite sides of a large ring (the gantry).
The patient is placed in the center of the ring and the source-detector system rotates around the patient, collecting projections. The patient can be moved in the axial direction on a patient tray. The patient tray not only allows patient positioning but also the acquisition of three-dimensional images.
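The Fourier slice theorem holds exactly in the discrete case for θ = 0, where the projection is a simple sum along one image axis and its slice through M(u,v) is the u-axis itself. The sketch below checks this with a synthetic absorption map; the array and seed are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = rng.random((64, 64))        # synthetic absorption map mu(x, y)

# Projection at theta = 0: the line integrals along y collapse to a sum.
p = mu.sum(axis=1)

# Fourier slice theorem: the 1-D transform of the projection equals the
# u-axis slice (v = 0) of the 2-D transform M(u, v) of the absorption map.
M = np.fft.fft2(mu)
print(np.allclose(np.fft.fft(p), M[:, 0]))   # True
```

For any other angle θ the slice must be interpolated in the frequency domain, which is one reason practical scanners favor filtered backprojection over direct Fourier reconstruction.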
Magnetic Resonance Imaging Magnetic resonance imaging (MRI) is another modality that requires the use of a computer for image formation. In a strong magnetic field, protons orient their spins along the magnetic field. The magnetic moments are not perfectly aligned, but rather, precess around the external field lines with an angular frequency that is proportional to the external field. The precession frequency is known as the Larmor frequency. With an externally introduced radio-frequency (RF) signal in resonance, that is, at the Larmor frequency, the orientation of the proton spins can be manipulated, but after cessation of the RF signal, the spins return to their original position. During this process, the spins emit a weak RF signal (echo) that can be picked up by an antenna. The time it takes for the spins to return to their original position depends on the tissue. Magnetic gradients allow us to change the precession frequency and precession phase angle along the spatial axes, and the spatial origin of an RF echo component can be reconstructed by Fourier analysis of the signal. In fact, the task of any MRI pulse sequence (i.e., the sequence of RF signals that manipulates spin precession) is to fill a frequency-domain placeholder, called a k-space matrix, with data. Inverse Fourier transform of the k-space matrix yields the cross-sectional image. Depending on the pulse sequence, different information can be obtained from the tissue. Three tissue constants are the relaxation times T1 and T2 and the proton density (water content). These tissue constants can vary strongly between different types of soft tissue, and for this reason, MRI provides excellent tissue–tissue contrast.
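The final reconstruction step described above can be sketched in a few lines: given a fully sampled k-space matrix, the image is recovered by an inverse two-dimensional Fourier transform. In a real scanner the pulse sequence fills k-space line by line; here, as a stand-in, k-space is synthesized from a known image so the round trip can be verified.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((128, 128))        # stand-in for the true cross-section

# Synthesize a fully sampled k-space matrix from the known image
# (a pulse sequence would acquire this data line by line).
k_space = np.fft.fft2(image)

# Reconstruction: inverse Fourier transform of the k-space matrix.
reconstructed = np.fft.ifft2(k_space).real
print(np.allclose(reconstructed, image))   # True
```

Undersampling or perturbing k-space rows in this sketch is a quick way to see the ghosting and blurring artifacts that incomplete acquisitions produce.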
Ultrasound Imaging Ultrasound imaging makes use of the physics of sound propagation in tissue. Sound waves propagate at a certain, tissue-dependent velocity. At the interface between two tissues, some of the sound is reflected, and the sound echo can be picked up by a receiver. The round-trip time of the echo can be translated into the depth of the echo source because the speed of sound is known. An A-mode scan (the echo strength as a function of depth) is obtained by emitting a short burst of sound into the tissue and recording the echoes for a short period of time. Sound generation and recording are carried out by transducers made of a piezoelectric material, that is, crystals that deform under the influence of an electric field and that generate an electric charge when deformed. An A-mode scan can be represented as a thin line on a screen where the intensity depends on the echo strength. By directing the incident sound wave in different directions, a B-mode scan can be obtained. A B-mode scan consists of several parallel or fan-shaped A-mode scans. It is also possible to record A-mode scans as a function of time, which is referred to as an M-mode (motion mode). Although ultrasound imaging could be performed with purely analog circuits, today’s ultrasound devices use digital signal and image processing. One disadvantage of ultrasound...
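The round-trip conversion behind an A-mode scan is simple arithmetic: the pulse travels to the interface and back, so the one-way depth is c·t/2. The sketch below assumes the commonly used average speed of sound in soft tissue, about 1540 m/s; the function name and example delay are illustrative.

```python
SPEED_OF_SOUND = 1540.0   # m/s, typical average for soft tissue

def echo_depth_cm(round_trip_seconds):
    """Depth of an echo source in cm: the pulse travels to the
    reflecting interface and back, so the one-way depth is c * t / 2."""
    return SPEED_OF_SOUND * round_trip_seconds / 2.0 * 100.0

# An echo arriving 65 microseconds after the pulse originates about 5 cm deep.
print(echo_depth_cm(65e-6))   # about 5 cm
```

The same relation sets the maximum pulse repetition rate: the scanner must wait for the deepest echo of interest to return before emitting the next burst.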

Table of contents

  1. Cover
  2. Half Title page
  3. Title page
  4. Copyright page
  5. Preface
  6. Chapter 1: Image Analysis: A Perspective
  7. Chapter 2: Survey of Fundamental Image Processing Operators
  8. Chapter 3: Image Processing in the Frequency Domain
  9. Chapter 4: The Wavelet Transform and Wavelet-Based Filtering
  10. Chapter 5: Adaptive Filtering
  11. Chapter 6: Deformable Models and Active Contours
  12. Chapter 7: The Hough Transform
  13. Chapter 8: Texture Analysis
  14. Chapter 9: Shape Analysis
  15. Chapter 10: Fractal Approaches to Image Analysis
  16. Chapter 11: Image Registration
  17. Chapter 12: Image Storage, Transport, and Compression
  18. Chapter 13: Image Visualization
  19. Chapter 14: Image Analysis and Visualization Software
  20. Appendix A: Design and Test of Simulations
  21. Appendix B: Parallel Discrete-Event Simulation
  22. Plates
  23. Index