Particle Swarm Optimization
Maurice Clerc
About this book
This is the first book devoted entirely to Particle Swarm Optimization (PSO), a non-specific algorithm in the same family as other metaheuristics such as tabu search, ant colonies, and evolutionary algorithms.
Since its original development in 1995, PSO has mainly been applied to heterogeneous continuous-discrete, strongly nonlinear numerical optimization problems, and it is thus used almost everywhere in the world. Its convergence rate also makes it a preferred tool in dynamic optimization.
PART I
Particle Swarm Optimization
Chapter 1
What is a Difficult Problem?
1.1. An intrinsic definition
As regards optimization, certain problems are regarded as more difficult than others. This is the case, inter alia, for combinatorial problems. But what does that mean? Why should a combinatorial problem necessarily be more difficult than one in continuous variables and, if it is, to what extent? Moreover, the concept of difficulty is very often related, more or less implicitly, to the degree of sophistication of the algorithms in a particular research field: if one cannot solve a particular problem, or if solving it takes considerable time, then the problem is deemed difficult.
Later, we will compare various algorithms on various problems, and we will therefore need a rigorous definition. To that end, let us consider the pure random search algorithm. It is often used as a reference, because even a slightly intelligent algorithm must be able to do better (even though it is very easy to do worse, for example with an algorithm that always gets stuck in a local minimum). Since the associated measure of difficulty is very seldom made explicit (see however [BAR 05]), we will do so here briefly.
The selected definition is as follows: the difficulty of an optimization problem in a given search space is the probability of not finding a solution by choosing a position at random according to a uniform distribution. It is thus the probability of failure at the first attempt.
Consider the following examples. Take the function f defined on [0, 1] by f(x) = x. The problem is to find the minimum of this function to within Δ. It is easy to calculate (assuming that Δ is less than 1) that the difficulty of this problem, following the definition above, is given by the quantity 1 − Δ. As we can see in Figure 1.1, it is simply the ratio of two measures: that of the set of acceptable solutions to that of the set of possible positions (in fact, the definition of a probability). From this point of view, the minimization of x² is twice as easy as that of x.
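To make this concrete, here is a minimal Monte Carlo sketch in Python (not from the book) that estimates this difficulty by uniform random sampling; the function name, the tolerance Δ = 0.05, and the number of trials are illustrative assumptions.

```python
import random

def estimated_difficulty(f, low, high, f_min, eps, trials=100_000):
    """Estimate the difficulty: the probability that a single uniform
    random draw does NOT land within eps of the minimum value f_min."""
    failures = sum(f(random.uniform(low, high)) - f_min > eps
                   for _ in range(trials))
    return failures / trials

# f(x) = x on [0, 1]: the acceptable set is [0, eps], so difficulty = 1 - eps
print(estimated_difficulty(lambda x: x, 0.0, 1.0, 0.0, eps=0.05))      # ~0.95
# f(x) = x**2 on [0, 1]: acceptable set is [0, sqrt(eps)], difficulty 1 - sqrt(eps)
print(estimated_difficulty(lambda x: x * x, 0.0, 1.0, 0.0, eps=0.05))  # ~0.78
```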
It should be noted that this assessment of difficulty can depend on the presence of local minima. For example, Figure 1.2 represents part of the graph of a variant of the so-called âAlpineâ function, f(x) = |x sin(x) + 0.1x|. For Δ = 0.5 the set of acceptable solutions is not connected. Of course, one part contains the position of the global minimum (0), but another part surrounds that of a local minimum whose value is less than Δ. In other words, if the function presents local minima, and particularly if their values are close to that of the global minimum, one may well obtain a mathematically satisfactory solution whose position is nevertheless very far from the solution actually hoped for.
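As a rough numerical check (not from the book), the sketch below samples this Alpine variant on a finite grid and lists the connected pieces of the acceptable set; the domain [0, 4] and the grid resolution are assumptions made for illustration.

```python
import numpy as np

def f(x):
    """Alpine variant from the text: |x sin(x) + 0.1 x|."""
    return np.abs(x * np.sin(x) + 0.1 * x)

eps = 0.5                            # tolerance on the objective value
x = np.linspace(0.0, 4.0, 100_001)   # illustrative domain and resolution
ok = f(x) <= eps                     # acceptable positions

# A connected component starts at each False -> True transition
# (a leading False is prepended so a component at x = 0 is counted).
d = np.diff(np.concatenate(([False], ok)).astype(int))
starts = np.flatnonzero(d == 1)
ends = np.flatnonzero(d == -1)
if ok[-1]:                           # component running to the right edge
    ends = np.append(ends, ok.size)

for s, e in zip(starts, ends):
    print(f"acceptable interval: [{x[s]:.3f}, {x[e - 1]:.3f}]")
# Two disjoint intervals are printed (one around 0, one near x = 3.24):
# the acceptable set is indeed not connected.
```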
By reducing the tolerance level (the acceptable error), one can certainly end up selecting only solutions actually located around the global minimum, but this obviously increases the practical difficulty of the problem. Conversely, one may therefore try to reduce the search space. But this requires some knowledge of the position of the solution sought and, moreover, it sometimes makes it necessary to define a search space more complicated than a simple Cartesian product of intervals, for example a polyhedron, which may even be non-convex. However, we will see that this second point can be handled in PSO by an option that allows an imperative constraint of the type g(position) < 0 to be taken into account.
1.2. Estimation and practical measurement
When high precision is required, the probability of failure is very high, and taking it directly as a measure of difficulty is not very practical. We will therefore use instead a logarithmic measure given by the following formula:
difficulty = −ln(1 − failure probability) = −ln(success probability)
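As a small illustrative sketch (in Python, not from the book), the helper below applies this formula; the probabilities passed in are arbitrary examples.

```python
import math

def log_difficulty(success_probability):
    """Logarithmic difficulty measure: -ln(success probability)."""
    return -math.log(success_probability)

# For f(x) = x on [0, 1] with tolerance eps, the success probability is eps:
print(log_difficulty(0.001))   # ~6.9   (eps = 1e-3)
print(log_difficulty(1e-6))    # ~13.8  (eps = 1e-6)
```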
In this way one obtains numbers that are easier to compare. Table 1.1 presents the results for four small problems. In each case, the goal is to reach a minimal value. For the first three, the functions are continuous and one must accept a certain margin of error, because that is what makes it possible to calculate a probability of success. The last problem is a classic âtraveling salesman problemâ with 27 cities, for which only one solution is assumed to exist. Here, the precision required is absolute: one wants to obtain this solution exactly.
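For the TSP case, a back-of-the-envelope computation (not from the book) gives the order of magnitude of this intrinsic difficulty, assuming the symmetric convention of (n − 1)!/2 distinct tours and a single optimal one:

```python
import math

# 27 cities, symmetric tours counted as (n - 1)!/2 (an assumption about
# the encoding), with exactly one optimal tour: the success probability
# of a single uniform random draw is 1/tours.
tours = math.factorial(26) // 2
print(math.log(tours))   # ~60.6, i.e. -ln(success probability)
```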
We see, for example, that the first and last problems are of the same level of intrinsic difficulty. It is therefore not absurd to imagine that the same algorithm, particularly if it uses randomness judiciously, can solve one as well as the other. Moreover, and we will return to this, the distinction between discrete or combinatorial problems and continuous problems is rather arbitrary, for at least two reasons:
– a continuous problem necessarily becomes discrete, since it is treated on a digital computer, hence with limited precision;
– a discrete problem can be replaced by an equivalent continuous problem under constraints, by interpolating the function that defines it over the search space (see the sketch below).
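As a minimal sketch of this second point (not from the book, which does not specify an interpolation scheme), piecewise-linear interpolation turns a discrete objective into a continuous one whose minimum coincides with the discrete optimum; the objective g below is an invented example.

```python
import numpy as np

# Discrete problem: minimize g over the integers k = 0, 1, ..., 10.
def g(k):
    return (k - 6) ** 2 + 3   # illustrative discrete objective

xs = np.arange(11)
ys = np.array([g(k) for k in xs])

# Continuous surrogate: piecewise-linear interpolation of g on [0, 10].
# A piecewise-linear function attains its minimum at a breakpoint, so
# minimizing h over the interval is equivalent to the discrete problem.
def h(x):
    return np.interp(x, xs, ys)

grid = np.linspace(0.0, 10.0, 10_001)
best = grid[np.argmin(h(grid))]
print(best, h(best))   # ~6.0, 3.0: recovers the discrete optimum k = 6
```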
1.3. For âamatheursâ: some estimates of difficulty
The probability of success can be estimated in various ways, according to the form of the function:
– direct calculation by integration in the simple cases;
– calculation on a finite expansion, either of the function itse...