Differential Evolution in Chemical Engineering

Developments and Applications

  1. 452 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android
About this book


Optimization has played a key role in the design, planning and operation of chemical and related processes for several decades. Techniques for solving optimization problems are either deterministic or stochastic. Of these, stochastic techniques can handle any type of optimization problem and can be adapted for multiple objectives. Differential evolution (DE), proposed about two decades ago, is one of the stochastic techniques. Its algorithm is simple to understand and use, and DE has found many applications in chemical engineering.

This unique compendium focuses on DE, its recent developments and its applications in chemical engineering, covering both single- and multi-objective optimization. The book contains a number of chapters by the experienced editors, as well as several chapters by active researchers in this area.

Contents:

  • Part I:
    • Introduction (Shivom Sharma and Gade Pandu Rangaiah)
  • Part II:
    • Differential Evolution: Method, Developments and Chemical Engineering Applications (Shaoqiang Chen, Gade Pandu Rangaiah and Mekapati Srinivas)
    • Application of Differential Evolution in Chemical Reaction Engineering (Mohammad Reza Rahimpour and Nazanin Hamedi)
    • Differential Evolution with Tabu List for Global Optimization: Evaluation of Two Versions on Benchmark and Phase Stability Problems (Mekapati Srinivas and Gade Pandu Rangaiah)
    • Integrated Multi-Objective Differential Evolution and Its Application to Amine Absorption Process for Natural Gas Sweetening (Shivom Sharma, Gade Pandu Rangaiah and François Maréchal)
  • Part III:
    • Heat Exchanger Network Retrofitting Using Multi-Objective Differential Evolution (Bhargava Krishna Sreepathi, Shivom Sharma and Gade Pandu Rangaiah)
    • Phase Stability and Equilibrium Calculations in Reactive Systems Using Differential Evolution and Tabu Search (Adrián Bonilla-Petriciolet, Gade Pandu Rangaiah, Juan Gabriel Segovia-Hernández and José Enrique Jaime-Leal)
    • Integrated Synthesis and Differential Evolution Methodology for Design and Optimization of Distillation Processes (Massimiliano Errico, Carlo Edgar Torres-Ortega and Ben-Guang Rong)
    • Optimization of Intensified Separation Processes Using Differential Evolution with Tabu List (Eduardo Sánchez-Ramírez, Juan José Quiroz-Ramírez, César Ramírez-Márquez, Gabriel Contreras-Zarazúa, Juan Gabriel Segovia-Hernández and Adrián Bonilla-Petriciolet)
    • Process Development and Optimization of Bioethanol Recovery and Dehydration by Distillation and Vapor Permeation for Multiple Objectives (Ashish Singh and Gade Pandu Rangaiah)
    • Optimal Control of a Fermentation Process for Xylitol Production Using Differential Evolution (Laís Koop, Marcos Lúcio Corazza, Fernando Augusto Pedersen Voll and Adrián Bonilla-Petriciolet)
    • Nested Differential Evolution for Mixed-Integer Bi-Level Optimization for Genome-Scale Metabolic Networks (Feng-Sheng Wang)
    • Applications of Differential Evolution in Polymerization Reaction Engineering (Elena-Niculina Dragoi and Silvia Curteanu)

Readership: Researchers, academics, professionals and graduate students in chemical engineering and optimization.


Part I

Chapter 1

Introduction

Shivom Sharma1 and Gade Pandu Rangaiah2,*
1Industrial Process and Energy Systems Engineering
École Polytechnique Fédérale de Lausanne,
CH-1951 Sion, Switzerland

2Department of Chemical and Biomolecular Engineering
National University of Singapore, 117585 Singapore
*Corresponding author: [email protected]

1.1 Process Optimization

Optimization is an approach for finding the best possible solution in the domain of interest while satisfying relevant constraints (restrictions). Optimization problems can be found everywhere, from engineering to economics and from daily life to holiday planning. For example, holiday planning optimization finds the place(s) to visit, when to visit, how to travel and how long to stay (which are all decision variables) to achieve the most happiness (which is the objective function or performance criterion); this may have constraints on budget, travel dates and places to visit, as well as other objectives such as safety.
Optimization has been fruitfully applied to improve the performance and/or understanding in diverse areas such as science, engineering, business and economics. The goal of optimization is to find the values of decision variables, which will maximize or minimize the value of a given objective function (performance criterion) without violating specified constraints. Mathematically, an optimization problem can be stated as follows.
$$\min_{x}\ f_1(x) \tag{1.1a}$$

$$\text{subject to:}\quad g(x) \le 0,\quad h(x) = 0,\quad x^L \le x \le x^U \tag{1.1b}$$
Here, f1(x) is the given objective function, x is the vector of decision variables with xL and xU as its lower and upper bounds, and g and h are the sets of inequality and equality constraints, respectively. Many application problems have more than one decision variable and a number of inequality and/or equality constraints.
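As a concrete illustration of formulation (1.1), the minimal sketch below sets up and solves a small constrained problem with SciPy's SLSQP solver (an SQP implementation). The objective, constraints, bounds and starting point are made-up examples for illustration only, not taken from this book.

```python
import numpy as np
from scipy.optimize import minimize

def f1(x):
    # Illustrative quadratic objective (an assumption, not from the book)
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

# SciPy expects inequality constraints in the form c(x) >= 0,
# so g(x) <= 0 is supplied as -g(x) >= 0.
ineq = {"type": "ineq", "fun": lambda x: 4.0 - (x[0] + x[1])}  # g: x0 + x1 - 4 <= 0
eq = {"type": "eq", "fun": lambda x: x[0] - 2.0 * x[1]}        # h: x0 - 2*x1 = 0

bounds = [(0.0, 5.0), (0.0, 5.0)]  # xL <= x <= xU
x0 = np.array([0.5, 0.5])          # starting point for the local solver

res = minimize(f1, x0, method="SLSQP", bounds=bounds, constraints=[ineq, eq])
print(res.x, res.fun)
```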
An optimization problem is generally assumed to have only one objective function as in equation (1.1a); such problems belong to single-objective optimization (SOO). Each of these problems will have one or more optimal solutions. Note that optimization refers to both minimization and maximization, and an optimum can be either a minimum or a maximum. A minimization objective can be transformed into a maximization objective by multiplying it by −1 or taking its reciprocal (with a suitable modification to avoid division by zero). Similarly, a minimization method can easily be modified into a maximization method. Many books describe optimization for minimization, and we follow the same convention in this chapter. In other words, optimization and optimum are used as synonyms for minimization and minimum, respectively.
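As a small illustration of this transformation, the sketch below negates a hypothetical profit function so that a minimization routine can maximize it; the function itself is an assumption for illustration only.

```python
# Hypothetical profit function used only for illustration.
def profit(x):
    return 10.0 * x - x**2

# Minimizing the negated objective is equivalent to maximizing the original;
# the maximum objective value is recovered by negating the minimum found.
def neg_profit(x):
    return -profit(x)
```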
In the literature, numerous chemical engineering application problems have been optimized for a single objective (e.g., see Himmelblau, 1972; Edgar et al., 2001; Ravindran et al., 2006; Rangaiah, 2010; Floudas, 2013). For example, optimization has been successfully applied in the design and operation of chemical and refinery processes, biotechnology, food technology, pharmaceuticals, fuel cells, power plants and bio-fuel production. Capital/equipment cost, operating cost, profit, net present value, energy consumption, efficiency, conversion, yield, selectivity, eco-indicator 99, global warming potential and CO2 emissions are commonly used objective functions in process optimization problems.

1.2 Classification of Optimization Methods

Optimization problems and methods can be classified in various ways using the characteristics summarized in Table 1.1. Some of these are briefly described in the following sub-sections. Many chemical engineering application problems have more than one variable and bounds on variables. Also, they often contain constraints arising from governing equations (such as mass and energy balances, and rate equations) and from process limitations (such as on maximum temperature, pressure and flow rate for safety and due to material of construction).
Table 1.1 Characteristics and classification of optimization problems and methods

Characteristic | Classification
Number of variables: one or more | Single-variable or multivariable optimization
Type of variables: real, integer or mixed | Nonlinear, integer or mixed (nonlinear) integer programming
Nature of equations: linear or nonlinear | Linear or nonlinear programming
Constraints: no constraints (besides bounds) or with constraints | Unconstrained or constrained optimization
Number of objectives: one or more | Single-objective or multi-objective optimization
Derivatives: without or using derivatives | Direct or gradient search optimization
Optimum: local or global in the search space | Local or global optimization
Random numbers: without or using random numbers | Deterministic or stochastic optimization methods
Trial points/solutions: one or more in each iteration | Single-point (also known as trajectory) or population-based methods

1.2.1 Use of derivatives

Optimization methods can be classified based on the use of derivative information. If the objective function and constraints are continuous and differentiable, then derivative-based methods such as steepest descent, quasi-Newton and successive quadratic programming (SQP) methods, which use the gradient vector, can be employed. These methods are computationally efficient, and they give the same solution in different runs if the initial point is the same. Derivative-free methods (e.g., Nelder-Mead or downhill simplex) are used when the objective function or constraints have discontinuities. Both gradient-based and gradient-free methods can be used for solving SOO problems. Some of them are for unconstrained optimization whereas others are for problems with constraints. For example, the Nelder-Mead, steepest descent and quasi-Newton methods are for problems without constraints, whereas the simplex, generalized reduced gradient (GRG) and SQP methods are for constrained optimization. For details on these methods, see Edgar et al. (2001) and Ravindran et al. (2006).
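To make the distinction concrete, the minimal sketch below runs a quasi-Newton method (BFGS, which approximates derivatives numerically when none are supplied) and the derivative-free Nelder-Mead method on the same smooth function; the Rosenbrock benchmark and starting point are illustrative choices, not examples from this chapter.

```python
import numpy as np
from scipy.optimize import minimize, rosen  # rosen = Rosenbrock test function

x0 = np.array([-1.2, 1.0])  # a standard starting point for Rosenbrock
for method in ("BFGS", "Nelder-Mead"):
    res = minimize(rosen, x0, method=method)
    print(f"{method:12s} x* = {res.x} f* = {res.fun:.3e} evaluations = {res.nfev}")
```

Both methods reach the minimum at (1, 1) on this smooth function, but the derivative-free method typically needs more function evaluations.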

1.2.2 Local and global methods

A given optimization problem may have more than one optimum. Fig. 1.1 illustrates this situation for both minima and maxima; in this figure, the x-axis represents the search space in one or many decision variables, and the objective function (y-axis) can be for minimization or maximization. There are three local minima, two global minima, four local maxima and one global maximum in Fig. 1.1. By definition, a local minimum is the minimum in its nearby region whereas a global minimum is the lowest minimum over the entire search region (within bounds and satisfying constraints, if any). The objective function in Fig. 1.1 is neither convex nor concave over the entire region, and it is said to be multi-modal.
Fig. 1.1 Local and global optima of an optimization problem
Based on their search capability, optimization methods can be classified into local and global methods. Local search methods generally converge to an optimum in the neighborhood of the initial/starting point. The Nelder-Mead, steepest descent, quasi-Newton, GRG and SQP methods are local search methods. They require an initial point or solution to start the search, and converge to a nearby minimum, which may be a local or the global minimum. Global methods, on the other hand, explore the entire search space, and have the capability to escape from local optima and find the global optimum. Multi-start is a simple strategy for finding the global optimum using a local method launched from a number of initial points (a minimal sketch follows this paragraph). Global optimization methods, particularly stochastic methods, may not be successful in every run, and they need more computation time compared to local optimization methods. Randomization is an important component of stochastic search methods. There are many global methods, and they are introduced in the next sub-sections.
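A minimal multi-start sketch, assuming the standard Rastrigin benchmark as the multi-modal test function and Nelder-Mead as the local method, could look like this:

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    # Rastrigin function: many local minima, global minimum of 0 at the origin
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

rng = np.random.default_rng(0)  # fixed seed for reproducibility
best = None
for _ in range(30):  # 30 random starting points
    x0 = rng.uniform(-5.12, 5.12, size=2)  # standard Rastrigin bounds
    res = minimize(rastrigin, x0, method="Nelder-Mead")
    if best is None or res.fun < best.fun:
        best = res
print(best.x, best.fun)
```

With enough random starts, at least one local search is likely to begin in the basin of the global minimum at the origin, although there is no guarantee in any single run.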

1.2.3 Deterministic or stochastic methods

Optimization methods can also be classified into deterministic and stochastic methods. In deterministic optimization methods, the search for the optimum is not random, i.e., it does not depend on random numbers; rather, it is determined by the algorithm, the optimization problem and the initial point. Hence, the new solution found in each iteration does not depend on random numbers, and the final/converged solution of a deterministic method depends on the initial point. Examples of deterministic optimization methods are the Nelder-Mead (also known as downhill simplex), steepest descent, quasi-Newton, GRG and SQP methods, which are all local methods. They are computationally efficient and can locate an optimum precisely. If the optimization problem is multi-modal as in Fig. 1.1, they are likely to converge to a local minimum near the initial point/solution, thus failing to find the global minimum. Note that the gradient-based deterministic methods also require a continuous and differentiable objective function and constraints.
Stochastic optimization methods, on the other hand, employ random numbers in their search strategies; they are more likely to find the global optimum, but they generally require more computation time and locate the optimum less precisely than deterministic methods. Most of them require neither continuity and differentiability of the functions in the optimization problem nor an initial/starting solution. Hence, stochastic methods can be applied to any type of optimization problem, including black-box problems, wherein only the effect of the decision variables on the objective and/or constraints is known (and not the underlying mathematical equations and their nature). Since they use random numbers in their search, they may converge to slightly different solutions in different runs. Stochastic optimization methods are based on search using a single point/solution or a population of points/solutions, and they are inspired by logical, physical and/or natural phenomena. Simulated annealing (SA), genetic algorithms (GA), differential evolution (DE), particle swarm optimization (PSO) and ant colony optimization (ACO) are stochastic SOO methods. They are also known as metaheuristic...
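As an illustration of a population-based stochastic method, the sketch below applies SciPy's DE implementation to the same Rastrigin function used in the multi-start sketch above; the seed, tolerance and dimensionality are illustrative choices, and different seeds may give slightly different results.

```python
import numpy as np
from scipy.optimize import differential_evolution

def rastrigin(x):
    # Same multi-modal test function as in the multi-start sketch above
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

bounds = [(-5.12, 5.12)] * 2  # DE needs only bounds, not a starting point
res = differential_evolution(rastrigin, bounds, seed=1, tol=1e-8)
print(res.x, res.fun)  # expect x* near the origin and f* close to 0
```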

Table of contents

  1. Cover Page
  2. Title
  3. Copyright
  4. Preface
  5. About the Editors
  6. List of Contributors
  7. Contents
  8. Supplementary Materials
  9. Part I
  10. Part II
  11. Part III
  12. Index