This title concerns the use of a particle filter framework to track objects defined in high-dimensional state-spaces using high-dimensional observation spaces. Current tracking applications require us to consider complex models for objects (articulated objects, multiple objects, multiple fragments, etc.) as well as multiple kinds of information (multiple cameras, multiple modalities, etc.). This book presents some recent research that considers the main bottleneck of particle filtering frameworks (high dimensional state spaces) for tracking in such difficult conditions.
Yes, you can access Tracking with Particle Filter for High-dimensional Observation and State Spaces by Séverine Dubuisson in PDF and/or ePUB format, as well as other popular books in Technology & Engineering & Signals & Signal Processing. We have over one million books available in our catalogue for you to explore.
The aim of this introductory chapter is to give a brief overview of the progress made over the last 20 years in visual tracking by particle filtering. To begin (section 1.2), we present the theoretical elements necessary for understanding particle filtering: we first introduce recursive Bayesian filtering, before outlining particle filtering itself. For more details, in particular proofs of theorems and convergence studies, we refer the reader to more advanced works [CHE 03b, DOU 00b, GOR 93]. We then explain how particle filtering is used for visual tracking in video sequences; the literature on this subject is abundant and evolving so fast that a complete overview is impossible. Next, section 1.3 presents certain limits of particle filtering. In section 1.4, we state our scientific position and the methodological axes that allow a part of these problems to be solved. Finally, section 1.5 reviews the current state of the main large families of approaches concerned with managing large-sized state and/or observation spaces in particle filtering.
1.2. Theoretical models
1.2.1. Recursive Bayesian filtering
Recursive Bayesian filtering [JAZ 70] aims to approximate the state of a hidden Markov process, which is observed through an observation equation. Let {x0:t} = {x0, . . . , xt} be this process, where xt is the state vector, yt the observation at instant t and the two models:
xt = ft(xt−1, ut)
yt = gt(xt, vt)   [1.1]
The first equation is the state equation, with the state transition function ft between the instants t – 1 and t, and the second is the observation equation, giving the measurement of the state through an observation function gt. ut and vt are independent white noises.
All the information necessary for approximating x0:t is contained in the a posteriori density, also known as the filtering density, p(x0:t|y1:t), where y1:t = {y1, y2, . . . , yt}. By applying the definition of conditional probabilities, we can prove that, for t ≥ 1 and a known p(x0), it satisfies the following recursive equation [CHE 03b]:
p(x0:t|y1:t) = p(x0:t−1|y1:t−1) p(yt|y1:t−1, x0:t) p(xt|x0:t−1, y1:t−1) / p(yt|y1:t−1)   [1.2]
Under the Markov hypothesis, p(yt|y1:t−1, x0:t) = p(yt|xt) (given the states, the observations at different instants are mutually independent, and each observation depends only on the state at the same instant) and p(xt|x0:t−1, y1:t−1) = p(xt|xt−1) (the current state depends only on the previous state), equation [1.2] becomes:
p(x0:t|y1:t) = p(x0:t−1|y1:t−1) p(yt|xt) p(xt|xt−1) / p(yt|y1:t−1)   [1.3]
The state transition equation is represented by the density p(xt|xt−1) and is linked to the function ft. This density, also called the transition function, gives the probability of the state xt at the instant t given the previous state xt−1. The observation equation is represented by p(yt|xt) and is linked to the function gt. This density, also called the likelihood function, gives the probability of making the observation yt given the state xt. Equation [1.3] is recursive and decomposes into two primary stages that we detail below.
1) The first stage, known as the prediction step, approximates the density p(x0:t|y1:t−1) using the transition distribution p(xt|xt−1) and the previously approximated density p(x0:t−1|y1:t−1).
2) The second stage, called the correction step, yields the a posteriori density p(x0:t|y1:t) using the likelihood distribution p(yt|xt), which depends on the new observation. This a posteriori density is the probability density of the set of states x0:t, among all possible states, given the history of observations y1:t.
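To make these two stages concrete, here is a minimal sketch (not from the book) of the prediction–correction recursion on a discretized 1D state space, where both integrals of the recursion reduce to sums; the random-walk model, noise levels and observations are all made up for illustration:

```python
import numpy as np

# Hypothetical 1D example: the state lives on a small discrete grid, so the
# prediction and correction integrals reduce to matrix-vector products.
grid = np.linspace(-5.0, 5.0, 201)          # discretized state space
prior = np.exp(-0.5 * grid**2)              # p(x_0), unnormalized Gaussian
prior /= prior.sum()

def transition_matrix(grid, noise_std=0.5):
    """p(x_t | x_{t-1}) for a random-walk state equation x_t = x_{t-1} + u_t."""
    d = grid[:, None] - grid[None, :]
    T = np.exp(-0.5 * (d / noise_std) ** 2)
    return T / T.sum(axis=0, keepdims=True)  # each column sums to 1

def likelihood(grid, y, noise_std=1.0):
    """p(y_t | x_t) for an observation equation y_t = x_t + v_t."""
    return np.exp(-0.5 * ((y - grid) / noise_std) ** 2)

T = transition_matrix(grid)
belief = prior
for y_t in [0.8, 1.1, 0.9]:                      # made-up observations
    predicted = T @ belief                       # prediction step
    corrected = likelihood(grid, y_t) * predicted  # correction step
    belief = corrected / corrected.sum()         # normalize by p(y_t|y_1:t-1)

estimate = (grid * belief).sum()                 # conditional-mean estimator
```

In vision, the state space is far too large for such a tabular discretization, which is precisely the limitation discussed below.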
In order to obtain calculable estimators of x0:t, we can use, for example, the conditional mean, given by:

E[φ(x0:t)|y1:t] = ∫ φ(x0:t) p(x0:t|y1:t) dx0:t   [1.4]

where φ is some bounded function. If the densities are Gaussian, then there exists a solution (an analytical expression of the Gaussian parameters to approximate) given by the Kalman filter [KAL 60]. Otherwise, equation [1.4] cannot be calculated directly. Under special conditions, we can invoke the solutions given by the following types of methods:
– analytical methods (extended Kalman filter [JAZ 70], unscented Kalman filter [JUL 97]) that approximate the law by a sum of Gaussians and are best adapted to weakly nonlinear and unimodal cases, which is nonetheless not appropriate for most vision problems;
– numerical methods (approximations by discrete tables, partitioning) that are, most of the time, complex to solve, not very flexible and only adapted to state spaces of small size.
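As an illustration of the linear-Gaussian case, where equation [1.4] admits the analytical Kalman solution, here is a minimal 1D sketch (not from the book); the model and all numbers are invented for illustration:

```python
# Minimal 1D Kalman filter sketch (all parameters are made up):
# state equation  x_t = a * x_{t-1} + u_t,  u_t ~ N(0, q)
# observation     y_t = x_t + v_t,          v_t ~ N(0, r)
a, q, r = 1.0, 0.25, 1.0
mean, var = 0.0, 1.0                     # Gaussian parameters of p(x_0)

for y in [0.8, 1.1, 0.9]:                # made-up observations
    # prediction: propagate the Gaussian through the state equation
    mean_pred = a * mean
    var_pred = a * a * var + q
    # correction: closed-form Bayesian update with the new observation
    k = var_pred / (var_pred + r)        # Kalman gain
    mean = mean_pred + k * (y - mean_pred)
    var = (1.0 - k) * var_pred
```

The whole a posteriori density is carried by just two numbers (mean, var); it is exactly this parametric shortcut that is unavailable in the non-Gaussian, multimodal densities encountered in vision.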
Most of the time, in vision, solutions are not adapted as the integrals are not directly calculable. For the general case (non-parametric and multi-modal densities), it is necessary to make use of numerical approximations, such as those provided by sequential Monte-Carlo methods, which we will present in the following section and that are the methodological heart of this work.
1.2.2. Sequential Monte-Carlo methods
Sequential Monte-Carlo methods, also known under the name of particle filters (PFs), were studied by many researchers at the beginning of the 1990s [GOR 93, MOR 95] and combine Monte-Carlo simulation with recursive Bayesian filtering. Today, they are widely used in the computer vision community. Before detailing the principle of particle filtering, we need to introduce importance sampling.
1.2.2.1. Importance sampling
Once the a posteriori density defined by equation [1.3] has been approximated, we can evaluate the estimator given in equation [1.4]. The Monte-Carlo method approximates this integral using realizations of a random variable distributed according to the a posteriori density. Unfortunately, we are almost never able to sample from this law; to solve this problem, we introduce a proposal function (or importance function) q(x0:t|y1:t), whose support contains that of p(x0:t|y1:t) and from which we can sample. The conditional mean is then given by:
E[φ(x0:t)|y1:t] = ∫ φ(x0:t) [p(x0:t|y1:t) / q(x0:t|y1:t)] q(x0:t|y1:t) dx0:t   [1.5]
With N realizations x0:t(i) ~ q(x0:t|y1:t), i = 1, . . . , N, we can approximate the previous estimator by:

Ê[φ(x0:t)|y1:t] = (1/N) Σi=1,...,N φ(x0:t(i)) w(x0:t(i))   [1.6]

The law of large numbers shows that this estimator converges almost surely toward E[φ(x0:t)|y1:t] when N tends to infinity. Here the importance weights are defined by w(x0:t) = p(x0:t|y1:t)/q(x0:t|y1:t), whose expression requires the calculation of the integral p(y1:t) = ∫ p(y1:t|x0:t) p(x0:t) dx0:t, which is generally impossible. We can nevertheless show that the following self-normalized estimator is usable [DOU 01]:

Ê[φ(x0:t)|y1:t] = Σi=1,...,N φ(x0:t(i)) w(i), with w(i) = w(x0:t(i)) / Σj=1,...,N w(x0:t(j))   [1.7]

where the unnormalized weights w(x0:t) ∝ p(y1:t|x0:t) p(x0:t) / q(x0:t|y1:t) no longer require p(y1:t), since it cancels in the normalization.
This estimator also converges almost surely when N tends to infinity. It is then sufficient to make the importance sampling recursive to obtain the particle filtering algorithm described below.
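As an illustration (not from the book), the following sketch applies self-normalized importance sampling to a 1D target whose normalizing constant is treated as unknown; the target, the proposal and the sample size are all assumptions chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target: unnormalized density p~(x) = exp(-0.5 (x - 1)^2),
# a Gaussian of mean 1 whose normalizing constant we pretend not to know
# (it plays the role of p(y_1:t | x_0:t) p(x_0:t) in equation [1.7]).
def unnormalized_target(x):
    return np.exp(-0.5 * (x - 1.0) ** 2)

# Proposal q: a wider Gaussian N(0, 2^2) whose support covers the target's.
N = 100_000
samples = rng.normal(0.0, 2.0, size=N)
q_pdf = np.exp(-0.5 * (samples / 2.0) ** 2) / (2.0 * np.sqrt(2.0 * np.pi))

w = unnormalized_target(samples) / q_pdf   # unnormalized importance weights
w_norm = w / w.sum()                       # normalized weights, as in [1.7]

estimate = np.sum(w_norm * samples)        # self-normalized estimate of E[x]
```

Because the weights are normalized by their sum, the unknown normalizing constant cancels, exactly the property that makes equation [1.7] usable; the estimate approaches the true mean 1.0 as N grows.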
1.2.2.2. Particle filter
The idea is thus to represent and approximate empirically the a posteriori density by a weighted sample {x0:t(i), wt(i)}, i = 1, . . . , N, of size N such that:

p(x0:t|y1:t) ≈ Σi=1,...,N wt(i) δ(x0:t − x0:t(i))   [1.8]

where the individuals x0:t(i), also called particles, are realizations of the random variable x0:t (the state of the object) in the state space (δ being the Dirac function). Every particle is therefore a possible solution of the state to approximate, and its associated weight represents its quality according to the available observations. Hence, the sample {x0:t(i), wt(i)} at the instant t is calculated from the previous sample {x0:t−1(i), wt−1(i)}, so as to obtain an approximation (via sampling) of the filtering density p(x0:t|y1:t) at the current instant. For this, three stages are necessary: i) a state exploration stage, during which we propagate the particles via the proposal function; ii) a stage for the evaluation (or correction) of the particle quality, which aims to calculate their new weights; and finally iii) an optional stage of particle selection (re-sampling). The generic particle filtering scheme (SIR filter – sequential importance resampling), between the instants t – 1 and t, is summarized in th...
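The three stages above can be sketched as follows on a hypothetical 1D random-walk model (not from the book), using the transition as the proposal function so that the weight update reduces to the likelihood; all parameters and observations are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal SIR (sequential importance resampling) sketch on a made-up model:
# state       x_t = x_{t-1} + u_t,  u_t ~ N(0, 0.5^2)
# observation y_t = x_t + v_t,      v_t ~ N(0, 1)
N = 1000
particles = rng.normal(0.0, 1.0, size=N)   # sampled from p(x_0)
weights = np.full(N, 1.0 / N)

for y in [0.8, 1.1, 0.9, 1.2]:             # made-up observations
    # i) exploration: propagate each particle through the proposal
    particles = particles + rng.normal(0.0, 0.5, size=N)
    # ii) correction: reweight by the likelihood p(y_t | x_t) and normalize
    weights *= np.exp(-0.5 * (y - particles) ** 2)
    weights /= weights.sum()
    # iii) selection (optional): resample when the effective sample size drops
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles = particles[idx]
        weights = np.full(N, 1.0 / N)

estimate = np.sum(weights * particles)     # conditional-mean estimate of x_t
```

Each particle carries a full hypothesis about the state; note that N must grow quickly with the dimension of the state space, which is the bottleneck this book addresses.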
Table of contents
Notations
Introduction
1. Visual Tracking by Particle Filtering
2. Data Representation Models
3. Tracking Models That Focus on the State Space
4. Models of Tracking by Decomposition of the State Space
5. Research Perspectives in Tracking and Managing Large Spaces