Particle Swarm Optimization

Maurice Clerc

About the book

This is the first book devoted entirely to Particle Swarm Optimization (PSO), a general-purpose metaheuristic in the same family as evolutionary algorithms, tabu search, and ant colony optimization.

Since its original development in 1995, PSO has mainly been applied to strongly non-linear numerical optimization with heterogeneous (continuous and discrete) variables, and it is now used almost everywhere in the world. Its convergence rate also makes it a preferred tool in dynamic optimization.

Publication details

Publisher: Wiley-ISTE
Year: 2013
ISBN: 9781118613979

PART I
Particle Swarm Optimization

Chapter 1

What is a Difficult Problem?

1.1. An intrinsic definition

As regards optimization, certain problems are regarded as more difficult than others. This is the case, inter alia, for combinatorial problems. But what does that mean? Why should a combinatorial problem necessarily be more difficult than one in continuous variables and, if it is, to what extent? Moreover, the concept of difficulty is very often related, more or less implicitly, to the degree of sophistication of the algorithms in a particular research field: if one cannot solve a particular problem, or if it takes considerable time to do so, then the problem is deemed difficult.
Later, we will compare various algorithms on various problems, and we will therefore need a rigorous definition. To that end, let us consider the pure random search algorithm. It is often used as a reference, because even a slightly intelligent algorithm must be able to do better (even though it is very easy to do worse, for example with an algorithm that always remains blocked in a local minimum). Since the measure of difficulty relative to this reference is very seldom made explicit (see however [BAR 05]), we will do so quickly here.
The selected definition is as follows: the difficulty of an optimization problem in a given search space is the probability of not finding a solution by choosing a position at random according to a uniform distribution. It is thus the probability of failure at the first attempt.
Consider the following examples. Take the function f defined on [0, 1] by f(x) = x. The problem is to find the minimum of this function to within ε, that is, a position x for which f(x) < ε. It is easy to calculate (assuming that ε is less than 1) that the difficulty of this problem, following the definition above, is the quantity 1 − ε. As we can see in Figure 1.1, the success probability is simply the ratio of two measures, that of the set of acceptable solutions to that of the set of all possible positions (in fact, the very definition of a probability), and the difficulty is its complement. From this point of view, the minimization of x² is about twice as easy as that of x: its set of acceptable solutions is [0, √ε], so for ε = 0.2 the success probability is √0.2 ≈ 0.45, roughly twice that of x.
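To make the definition concrete, here is a minimal Monte Carlo sketch (not from the book; the function choices and sample size are arbitrary) that estimates the failure probability of a single uniform draw and can be checked against the analytic values 1 − ε and 1 − √ε:

```python
import random

def estimated_difficulty(f, eps, trials=1_000_000):
    """Monte Carlo estimate of the probability that a single
    uniform draw on [0, 1] fails to satisfy f(x) < eps."""
    failures = sum(1 for _ in range(trials) if f(random.random()) >= eps)
    return failures / trials

eps = 0.2
print(estimated_difficulty(lambda x: x, eps))      # ~0.80 (analytic: 1 - eps)
print(estimated_difficulty(lambda x: x * x, eps))  # ~0.55 (analytic: 1 - sqrt(eps))
```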
Figure 1.1. Assessing the difficulty. The intrinsic difficulty of a problem of minimizing a function (in this case, the search for a position x for which f(x) is less than 0.2) has nothing to do with the apparent complication of the function's formula. On the search space [0, 1], it is the function x² that is by far the easiest, whereas there is little to choose between the two others, function x being very slightly more difficult.
It should be noted that this assessment of difficulty can depend on the presence of local minima. For example, Figure 1.2 represents part of the graph of a variant of the so-called "Alpine" function, f(x) = |x sin(x) + 0.1x|. For ε = 0.5, the set of acceptable solutions is not connected. One part contains the position of the global minimum (0), but another part surrounds that of a local minimum whose value is less than ε. In other words, if the function presents local minima, and particularly if their values are close to that of the global minimum, one may well obtain a solution that is mathematically satisfactory but whose position is nevertheless very far from the solution hoped for.
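To see this non-connectedness concretely, the sketch below (an illustration under arbitrary assumptions: the grid step and the scan range [0, 12] are my choices, not the book's) lists the disjoint intervals on which this Alpine variant stays below the tolerance ε = 0.5:

```python
import math

def alpine(x):
    return abs(x * math.sin(x) + 0.1 * x)

eps = 0.5
xs = [i / 1000 for i in range(12001)]       # grid on [0, 12]
inside = [alpine(x) < eps for x in xs]

# Collect the maximal runs of grid points satisfying the tolerance.
intervals, start = [], None
for x, ok in zip(xs, inside):
    if ok and start is None:
        start = x
    elif not ok and start is not None:
        intervals.append((start, x))
        start = None
if start is not None:
    intervals.append((start, xs[-1]))

print(intervals)   # several disjoint intervals: the acceptable set is not connected
```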
By reducing the tolerance level (the acceptable error), one can certainly end up selecting only solutions actually located around the global minimum, but this obviously increases the practical difficulty of the problem. Conversely, one can instead reduce the search space. But this requires some knowledge of the position of the sought solution and, moreover, it sometimes makes it necessary to define a search space more complicated than a simple Cartesian product of intervals, for example a polyhedron, which may even be non-convex. However, we will see that this second point can be handled in PSO by an option that allows an imperative constraint of the type g(position) < 0 to be taken into account.
Figure 1.2. A non-connected set of solutions. If the tolerance level is too high (here 0.5), some solutions can be found around a local minimum. Two different ways of avoiding this problem when searching for a global minimum are to reduce the tolerance level (which increases the practical difficulty of the search) or to reduce the search space (which decreases the difficulty). But this second method requires that we have at least a vague idea of the position of the sought minimum.

1.2. Estimation and practical measurement

When high precision is required, the probability of failure is very high, and taking it directly as a measure of difficulty is not very practical. We will therefore use instead a logarithmic measure, given by the following formula:
difficulty = −ln(1 − failure probability) = −ln(success probability)
In this way one obtains numbers that are easier to compare. Table 1.1 presents the results for four small problems. In each case, the goal is to reach a minimal value. For the first three, the functions are continuous and a certain margin of error must be accepted, because that is what makes it possible to calculate the probability of success. The last problem is a classic "traveling salesman problem" with 27 cities, for which only one solution is assumed to exist. Here the precision required is absolute: one wants to obtain this solution exactly.
Table 1.1. Difficulty of four problems compared. When the probabilities of success are very low, it is easier to compare their logarithms. The ways of calculating the difficulty are given at the end of the chapter. For the third function, the value is only a rather pessimistic statistical estimate (in reality, one should be able to find a value less than the difficulty of the first function). For the traveling salesman problem (search for a Hamiltonian cycle of minimal length), it was assumed that there is only one solution, of value C; it must be reached exactly, without any margin of error.
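As an illustration of the formula (a back-of-the-envelope sketch: the tolerance ε = 10⁻⁶ is an arbitrary choice of mine, and these numbers are not claimed to reproduce the exact entries of Table 1.1), the logarithmic difficulty is easy to compute once the success probability is known:

```python
import math

def difficulty(success_probability):
    """Logarithmic difficulty: -ln(success probability)."""
    return -math.log(success_probability)

# Continuous example: f(x) = x on [0, 1] with tolerance eps,
# so the success probability of one uniform draw is eps itself.
eps = 1e-6
print(difficulty(eps))            # -ln(1e-6) ~ 13.8

# 27-city TSP, assuming a unique optimal tour among the
# (27 - 1)!/2 distinct Hamiltonian cycles.
cycles = math.factorial(26) // 2
print(difficulty(1 / cycles))     # ln(26!/2) ~ 60.6
```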
We see, for example, that the first and last problems are of the same level of intrinsic difficulty. It is therefore not absurd to imagine that the same algorithm, particularly if it uses randomness judiciously, can solve one as well as the other. Moreover, and we will return to this, the distinction between discrete/combinatorial problems and continuous problems is rather arbitrary, for at least two reasons:
– a continuous problem necessarily becomes discrete when it is treated on a digital computer, hence with limited precision;
– a discrete problem can be replaced by an equivalent continuous problem under constraints, by interpolating the function that defines it over the search space (see the sketch below).
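As a toy illustration of this second point (a sketch under my own assumptions, not the author's construction; the sample values are invented for the example), a function known only at the integers 0..5 can be extended to all of [0, 5] by piecewise-linear interpolation:

```python
# Toy example: a discrete objective known only at integer points.
values = {0: 3.0, 1: 1.5, 2: 2.0, 3: 0.5, 4: 2.5, 5: 1.0}

def interpolated(x):
    """Piecewise-linear extension of the discrete objective to [0, 5]."""
    if x <= 0:
        return values[0]
    if x >= 5:
        return values[5]
    lo = int(x)
    t = x - lo
    return (1 - t) * values[lo] + t * values[lo + 1]

# The continuous extension can now be treated like any numerical problem;
# an integrality constraint on x recovers the original discrete problem.
print(interpolated(2.5))   # halfway between values[2] and values[3] -> 1.25
```

Note that with linear interpolation the minimum of the extension is always attained at a grid point, since the interpolant is linear between nodes; with smoother interpolation schemes, an explicit integrality constraint is genuinely needed to recover the original discrete problem.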

1.3. For “amatheurs”: some estimates of difficulty

The probability of success can be estimated in various ways, according to the form of the function:
– direct calculation by integration in the simple cases;
– calculation on a finite expansion, either of the function itse...
