Optimizing Engineering Problems through Heuristic Techniques

Kaushik Kumar, Divya Zindani, J. Paulo Davim

About This Book

This book covers heuristic optimization techniques and their applications to engineering problems. It is organized into three sections covering techniques that engineers, researchers, and manufacturing industries can employ to improve productivity, with the motive of socio-economic development. It is the first book in its category to address heuristic techniques with direct relevance to engineering problems and the attainment of optimal solutions.

Features

  • Explains the concept of optimization and the relevance of using heuristic techniques for optimal solutions in engineering problems
  • Illustrates the various heuristic techniques
  • Describes evolutionary heuristic techniques like genetic algorithm and particle swarm optimization
  • Covers nature-inspired techniques such as ant colony optimization, the bee algorithm, firefly optimization, and cuckoo search
  • Offers sample problems and their optimization, using various heuristic techniques


Information

Publisher
CRC Press
Year
2019
ISBN
9781351049566
Edition
1

PART I

EVOLUTIONARY TECHNIQUES

2 Genetic Algorithm

2.1 INTRODUCTION

Computational intelligence, together with evolutionary computation, is one of the fastest growing fields in the optimization sciences. A number of optimization algorithms exist for solving complex real-world problems, and most of them mimic biological processes observed in nature. Most evolutionary algorithms share a similar framework. They begin with a population of random solutions, and the suitability of each solution is adjudged through a fitness function. Over a number of iterations the solutions are improved and the best one is retained. The next set of solutions is then generated by combining the best solutions obtained so far with stochastic selections. Several random components are involved when an evolutionary algorithm selects and combines solutions in each population. Therefore, in comparison to deterministic algorithms, evolutionary algorithms are less reliable in reproducing a solution: a deterministic algorithm returns the same solution on every run. However, slower speed and the possibility of stagnating at a local solution are the major problems of deterministic algorithms when applied to large-scale problems.
Evolutionary algorithms are heuristic and stochastic, meaning that heuristic information is employed to search only part of the search space. These algorithms search selected regions of the solution space by finding the best solution in each population and then using the generated solutions to improve the others. Evolutionary algorithms are now applied to large-scale problems and have therefore gained wide popularity for their flexibility. Another advantage of evolutionary algorithms is that they treat optimization problems as black boxes. Genetic algorithm (GA) is one of the first and best-known evolutionary algorithms, and the present chapter therefore discusses and analyzes GA.
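The general loop described above can be sketched in a few lines of Python. This is a minimal illustration only, assuming a simple real-valued minimization problem (the sphere function) and arbitrary parameter values; the function names and update rule are illustrative, not taken from the book.

```python
import random

def fitness(x):
    # lower is better for this illustrative minimization problem
    return sum(v * v for v in x)

def random_solution(dim=5, lb=-5.0, ub=5.0):
    return [random.uniform(lb, ub) for _ in range(dim)]

def evolve(pop_size=30, generations=100):
    # start with a population of random solutions
    population = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        # adjudge suitability through the fitness function
        population.sort(key=fitness)
        best = population[: pop_size // 2]
        # generate new solutions by combining good ones with stochastic perturbations
        children = []
        while len(children) < pop_size - len(best):
            a, b = random.sample(best, 2)
            child = [(x + y) / 2 + random.gauss(0, 0.1) for x, y in zip(a, b)]
            children.append(child)
        population = best + children
    return min(population, key=fitness)

print(evolve())
```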

2.2 GENETIC ALGORITHM

GA is inspired by the theory of biological evolution proposed by Darwin (Holland, 1992; Goldberg and Holland, 1988). Survival of the fittest is the main mechanism simulated in GA. In nature, the fittest individuals have the highest probability of survival and transfer their genes to the next generation. In due course of time, the genes that allow a species to adapt to its environment become dominant and play a vital role in the survival of subsequent generations of the species.
GA is built around chromosomes and genes and therefore maps naturally onto an optimization problem: a chromosome represents a solution, and each variable of the optimization problem is represented by a gene. For instance, an optimization problem with ten variables is represented by a chromosome with ten genes. Selection, crossover, and mutation are the three main operators employed by GA to improve the solutions, i.e., the chromosomes. The following sub-sections describe these steps, as well as the representation of the optimization problem and the initial population.
A chromosome is made from genes that represent the variable set of a given optimization problem. The first step in using GA is to formulate the problem and define its parameters in the form of a vector. Binary and continuous are the two variants of GA. In a binary GA each gene can take one of two values, whereas in a continuous GA each gene takes a continuous value within its upper and lower bounds. A special case of binary GA arises when a variable can take more than two values; in such cases more memory, i.e., more bits, must be allocated to that variable. For instance, if an optimization problem has two variables, each of which can take eight different values, then each variable requires three genes. In general, a variable with n discrete values requires log₂ n genes. Genes serve no purpose until they are fed into the fitness function and yield a fitness value. GA is referred to as genetic programming when each gene represents a part of a computer program.
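The binary encoding described above can be sketched as follows. This is an illustrative assumption of how such an encoding might be implemented (the helper names and the wrap-around decode rule are not from the book); it shows the two-variable, eight-value example requiring three bits per variable.

```python
import math
import random

def genes_per_variable(n_values):
    # a variable with n discrete values needs ceil(log2(n)) binary genes
    return math.ceil(math.log2(n_values))

def random_chromosome(n_variables, n_values):
    bits = genes_per_variable(n_values)
    return [random.randint(0, 1) for _ in range(n_variables * bits)]

def decode(chromosome, n_variables, n_values):
    bits = genes_per_variable(n_values)
    values = []
    for i in range(n_variables):
        segment = chromosome[i * bits:(i + 1) * bits]
        index = int("".join(map(str, segment)), 2) % n_values  # wrap to a valid value
        values.append(index)
    return values

# two variables, eight possible values each -> three genes per variable
chromosome = random_chromosome(n_variables=2, n_values=8)
print(chromosome, decode(chromosome, 2, 8))
```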
The GA process begins with a set of randomly generated genes. Equation (2.1) is used in the case of a binary GA:
$$X_i = \begin{cases} 1, & r_i < 0.5 \\ 0, & \text{otherwise} \end{cases} \quad (2.1)$$
where the i-th gene is represented by $X_i$ and $r_i$ is a random number between 0 and 1.
Equation (2.2) is used in the case of a continuous GA to randomly initialize the genes:
$$X_i = (ub_i - lb_i) \cdot r_i + lb_i \quad (2.2)$$
where the upper bound for the i-th gene is represented by $ub_i$ and the lower bound by $lb_i$.
The main objective of the initial population phase is to obtain uniformly distributed random solutions across all variables, because these solutions are used subsequently by the other operators. A sketch of both initialization rules is given below.
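The following sketch implements Equations (2.1) and (2.2) directly. The bound names follow the text ($ub_i$, $lb_i$); the chromosome length and bound values are illustrative assumptions.

```python
import random

def init_binary_gene():
    r = random.random()            # r_i in [0, 1)
    return 1 if r < 0.5 else 0     # Eq. (2.1)

def init_continuous_gene(lb, ub):
    r = random.random()
    return (ub - lb) * r + lb      # Eq. (2.2)

# illustrative chromosomes of length 10
binary_chromosome = [init_binary_gene() for _ in range(10)]
continuous_chromosome = [init_continuous_gene(-5.0, 5.0) for _ in range(10)]
print(binary_chromosome)
print(continuous_chromosome)
```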
Natural selection is simulated by the selection operator of GA. In natural selection, the chance of survival increases in proportion to fitness. Once selected, the genes are propagated to, and adopted by, the subsequent generations.
The fitness values are normalized and mapped to probability values by the roulette wheel, whose lower and upper bounds are 0 and 1, respectively. One individual is selected by generating a random number within this interval. The larger the sector an individual occupies on the roulette wheel, the higher its chance of being selected.
A pertinent question that may arise in the reader's mind is why poor individuals are not simply discarded. It is worth noting that even individuals with a low fitness value may be able to mate and contribute toward producing the subsequent generation; this depends on other important factors such as competition, territory, and environmental conditions. An individual with a poor fitness value may, in conjunction with the genes of other individuals, still produce excellent features. Hence, by not discarding poor solutions, poor individuals are given a chance and good features are retained.
Since the range of fitness values changes and is problem dependent, normalization is very important. One issue with the roulette wheel is that it fails to handle negative values; negative values must therefore be mapped to positive ones through fitness scaling, as they would otherwise corrupt the cumulative-sum process. A sketch of roulette-wheel selection with such scaling is given below.
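The sketch below is one possible way to implement roulette-wheel selection as described above: fitness values are shifted to be non-negative, normalized to probabilities, and an individual is picked by drawing a random number in [0, 1). The population, fitness values, and the small shift constant are illustrative assumptions (maximization is assumed).

```python
import random

def roulette_wheel_select(population, fitnesses):
    # shift so that negative fitness values do not break the cumulative sum
    min_fit = min(fitnesses)
    scaled = [f - min_fit + 1e-9 for f in fitnesses]
    total = sum(scaled)
    probs = [f / total for f in scaled]   # normalized probabilities in [0, 1]
    r = random.random()
    cumulative = 0.0
    for individual, p in zip(population, probs):
        cumulative += p
        if r <= cumulative:
            return individual
    return population[-1]

pop = ["A", "B", "C", "D"]
fit = [4.0, 1.0, -2.0, 7.0]
print([roulette_wheel_select(pop, fit) for _ in range(10)])
```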
Some of the other selection operators (Genlin, 2004) besides roulette wheel are: steady-state reproduction (Syswerda, 1989), proportional selection (Grefenstette, 1989), fuzzy selection (Ishibuchi and Yamamoto, 2004), truncation selection (Blickle and Thiele, 1996), rank selection (Kumar, 2012), Boltzmann selection (Goldberg, 1990), linear rank selection (Grefenstette, 1989), fitness uniform selection (Hutter, 2002), local selection (Collins and Jefferson, 1991), steady-state selection (Syswerda, 1991) and tournament selection (Miller and Goldberg, 1995).
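Of the alternatives listed above, tournament selection is particularly simple to sketch: k individuals are sampled at random and the fittest of them is chosen as a parent. The sketch below is illustrative only (maximization assumed, k = 3 chosen arbitrarily); it is not the book's listing.

```python
import random

def tournament_select(population, fitnesses, k=3):
    # sample k contenders at random and return the fittest one
    contenders = random.sample(range(len(population)), k)
    winner = max(contenders, key=lambda i: fitnesses[i])
    return population[winner]

pop = ["A", "B", "C", "D", "E"]
fit = [4.0, 1.0, 2.5, 7.0, 3.3]
print([tournament_select(pop, fit) for _ in range(5)])
```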
The natural selection process selects the individuals that are treated as parents in the crossover step. Crossover allows genes to be exchanged between individuals to produce new solutions. The literature suggests a number of different crossover methods. In the simplest of these, the chromosome is divided into two or three pieces (Shenoy et al., 2005) and the genes between the chromosomes are then exchanged. This is visualized clearly in Figure 2.2.
In single-point crossover there is one crossover point, and the segments of the two parent chromosomes on either side of it are swapped. In double-point crossover there are two crossover points, i.e., the parent chromosomes swap the segment lying between these points. Other crossover techniques mentioned in the literature are: uniform crossover (Semenkin and Semenkina, 2012), three-parent crossover (Tsutsui et al., 1999), cycle crossover (Smith and Holland, 1987), position-based crossover (Fonseca and Fleming, 1995), masked crossover (Louis and Rawlins, 1991), half-uniform crossover (Hu and Di Paolo, 2009), partially matched crossover (Bäck et al., 2018), order crossover (Davis, 1985), heuristic crossover (Fogel and Atma, 1990) and multi-point crossover (Eshelman et al., 1989).
The overall objective of the crossover step is to ensure that genes are exchanged and that the children inherit genes from the parent solutions. Crossover is the main exploration mechanism in GA: because the crossover points are chosen at random, GA searches over different combinations of the genes coming from the parents. This step therefore aids the exploration of possible solutions without introducing new genes. A sketch of single- and double-point crossover is given below.
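The sketch below shows single-point and double-point crossover on bit-string chromosomes. The parent chromosomes are illustrative; the operators implement the segment swaps described above (the idea shown in Figure 2.2).

```python
import random

def single_point_crossover(p1, p2):
    # pick one crossover point and swap the tails of the two parents
    point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def double_point_crossover(p1, p2):
    # pick two crossover points and swap the segment between them
    a, b = sorted(random.sample(range(1, len(p1)), 2))
    return (p1[:a] + p2[a:b] + p1[b:],
            p2[:a] + p1[a:b] + p2[b:])

parent1 = [0, 0, 0, 0, 0, 0, 0, 0]
parent2 = [1, 1, 1, 1, 1, 1, 1, 1]
print(single_point_crossover(parent1, parent2))
print(double_point_crossover(parent1, parent2))
```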
The crossover probability, i.e., Pc, is an important parameter in GA ...
