Lagrangian Multiplier Method
The Lagrangian multiplier method is a mathematical technique for optimizing a function subject to one or more constraints. It introduces a new variable for each constraint, the Lagrange multiplier, which folds the constraints into the optimization itself. By setting up and solving the resulting Lagrangian equations, businesses can find the optimal values of their decision variables while satisfying the given constraints.
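To make the idea concrete, here is a minimal sketch in Python with SymPy; the production function f = x·y and the budget constraint x + y = 10 are invented for illustration, not drawn from any of the excerpts below.

```python
# Minimal sketch (invented data): maximize output f = x*y subject to a
# budget constraint x + y = 10, using the Lagrange function L = f + lam*g.
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)
f = x * y                       # hypothetical production function
g = x + y - 10                  # budget constraint written as g = 0

L = f + lam * g
# Stationarity: every partial derivative of L must vanish; the derivative
# with respect to lam reproduces the constraint itself.
eqs = [sp.diff(L, v) for v in (x, y, lam)]
print(sp.solve(eqs, [x, y, lam], dict=True))  # -> [{x: 5, y: 5, lam: -5}]
```

Up to sign, the multiplier is the shadow price of the budget: relaxing the constraint from 10 to 11 raises the achievable output by roughly 5, the magnitude of λ at the optimum.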
Written by Perlego with AI-assistance
7 Key excerpts on "Lagrangian Multiplier Method"
- eBook - PDF
The Nonlinear Workbook
Chaos, Fractals, Cellular Automata, Neural Networks, Genetic Algorithms, Gene Expression Programming, Support Vector Machine, Wavelets, Hidden Markov Models, Fuzzy Logic with C++, Java and SymbolicC++ Programs
- Willi-Hans Steeb (Author)
- 2008 (Publication Date)
- WSPC (Publisher)
Chapter 15 Optimization

15.1 Lagrange Multiplier Method

In mathematical optimization problems, Lagrange multipliers are a method for dealing with equality constraints. The Lagrange multiplier method is as follows. Let $M$ be a manifold and $f$ be a real-valued function of class $C^{(2)}$ on some open set containing $M$. We consider the problem of finding the extrema of the function $f|_M$. This is called a problem of constrained extrema. Assume that $f$ has a constrained extremum at $x^* = (x_1^*, x_2^*, \ldots, x_n^*)$. Let $g_1(x) = 0, \ldots, g_m(x) = 0$ be the constraints (manifolds) with $m < n$. We assume that $f$ and $g_j$ ($j = 1, \ldots, m$) are continuously differentiable in a neighbourhood of $x^*$. Then there exist real numbers $\lambda_1, \ldots, \lambda_m$ such that $x^*$ is a critical point of the function (called the Lagrange function)

$$L(x) := f(x) + \lambda_1 g_1(x) + \cdots + \lambda_m g_m(x).$$

The numbers $\lambda_1, \ldots, \lambda_m$ are called Lagrange multipliers. Thus we have to solve

$$\nabla L(x^*) = 0, \qquad g_j(x^*) = 0, \quad j = 1, 2, \ldots, m$$

with respect to $x_1^*, \ldots, x_n^*, \lambda_1^*, \ldots, \lambda_m^*$. Here $\nabla$ denotes the gradient, and we have to assume that the rank of the matrix $\nabla g(x^*)$ is $m$, with $m < n$. We have the following theorem.

Theorem. Let $f: \mathbb{R}^n \to \mathbb{R}$ be a twice continuously differentiable function in an open set $\Omega \subseteq \mathbb{R}^n$. Let $S \subseteq \mathbb{R}^n$ be an open set. Let $g = (g_1, g_2, \ldots, g_m): S \to \mathbb{R}^m$ be twice continuously differentiable, and assume that $m < n$. Let $X_0$ be the subset of $S$ where $g$ vanishes, that is, $X_0 := \{x \in S : g(x) = 0\}$. Suppose that $x^* \in X_0$ and assume that there is a neighbourhood $N$ of $x^*$ such that $f$ achieves a maximum or minimum at $x^*$ in $N \cap X_0$. Also assume that the determinant of the $m \times m$ matrix $(\partial g_i(x^*)/\partial x_j)$ does not vanish.
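The stationarity system $\nabla L(x^*) = 0$, $g_j(x^*) = 0$ can be handed directly to a computer algebra system. Below is a small Python/SymPy sketch for a single constraint; the objective f = x + y and the unit-circle constraint are our own illustrative choices, not taken from the book.

```python
# Solve grad L = 0 together with g = 0 for one equality constraint:
# extrema of f(x, y) = x + y on the unit circle x^2 + y^2 = 1.
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)
f = x + y
g = x**2 + y**2 - 1

L = f + lam * g
stationarity = [sp.diff(L, x), sp.diff(L, y)]   # grad L = 0
system = stationarity + [g]                     # plus the constraint g = 0

for sol in sp.solve(system, [x, y, lam], dict=True):
    print(sol, "f =", sp.simplify(f.subs(sol)))
# Two critical points: (1/sqrt(2), 1/sqrt(2)) is the maximum,
# (-1/sqrt(2), -1/sqrt(2)) the minimum.
```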
- eBook - ePub
Design and Optimization of Thermal Systems, Third Edition
with MATLAB Applications
- Yogesh Jaluria (Author)
- 2019 (Publication Date)
- CRC Press (Publisher)
Calculus methods, whenever applicable, provide a fast and convenient method to determine the optimum. They also indicate the basic considerations in optimization and the characteristics of the problem under consideration. In addition, some of the ideas and procedures used for these methods are employed in other techniques. Therefore, it is important to understand this optimization method and the basic concepts introduced by this approach. This chapter presents the Lagrange multiplier method, which is based on the differentiation of the objective function and the constraints. The physical interpretation of this approach is brought out and the method is applied to both constrained and unconstrained optimization. The sensitivity of the optimum to changes in the constraints is discussed. Finally, the application of this method to thermal systems is considered.

8.2 THE LAGRANGE MULTIPLIER METHOD
This is the most important and useful method for optimization based on calculus. It can be used to optimize functions that depend on a number of independent variables, with and without functional constraints. As such, it can be applied to a wide range of practical circumstances, provided the objective function and the constraints can be expressed as continuous and differentiable functions. In addition, only equality constraints can be considered in the optimization process.

8.2.1 Basic Approach
The mathematical statement of the optimization problem was given in the preceding chapter as
$$U(x_1, x_2, x_3, \ldots, x_n) \rightarrow \text{Optimum} \tag{8.4}$$

subject to the constraints

$$G_1(x_1, x_2, x_3, \ldots, x_n) = 0$$
$$G_2(x_1, x_2, x_3, \ldots, x_n) = 0$$
$$\vdots$$
$$G_m(x_1, x_2, x_3, \ldots, x_n) = 0 \tag{8.5}$$

where $U$ is the objective function that is to be optimized and $G_i = 0$, with $i$ varying from 1 to $m$, represents the $m$
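As a sketch of this general formulation, the following Python/SymPy fragment builds the Lagrange function $U + \sum_i \lambda_i G_i$ for an invented objective and two invented equality constraints, then solves the n + m stationarity equations; none of the specific functions come from the text.

```python
# Generic construction of the Lagrange function for Equations (8.4)-(8.5);
# the concrete U and G_i below are made-up placeholders.
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3", real=True)
U = x1**2 + 2*x2**2 + x3**2          # objective to optimize (invented)
G = [x1 + x2 + x3 - 6,               # equality constraints G_i = 0 (invented)
     x1 - x2]

lams = sp.symbols(f"lam1:{len(G) + 1}")          # one multiplier per constraint
L = U + sum(l * g for l, g in zip(lams, G))

unknowns = [x1, x2, x3, *lams]
system = [sp.diff(L, v) for v in unknowns]       # n + m stationarity equations
print(sp.solve(system, unknowns, dict=True))     # x1 = x2 = 12/7, x3 = 18/7
```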
- eBook - PDF
Incentives
Motivation and the Economics of Information
- Donald E. Campbell (Author)
- 2018 (Publication Date)
- Cambridge University Press (Publisher)
Therefore, even if the planner has no intention of deferring to the market system, prices are embedded in the mathematical logic of constrained maximization. They can be used to guide the system to the socially optimal menu of goods and services – that is, the one that maximizes U subject to resource and technology constraints. Moreover, when prices are used to guide decision making, it is far easier to design incentives to get producers and consumers to do their part in arriving at the optimal menu. When there are many variables and many constraints, the Lagrangian technique is by far the most efficient. And, as Example 2.12 demonstrates, once the planners start using Lagrangians they are using prices. The Lagrangian multiplier is the marginal value of an additional unit of the scarce resource that gives rise to the constraint, as we explain in greater depth in the next section.

2.3.3 Lagrangian Multipliers with More than One Resource Constraint

Consider the problem: maximize $f(x, y)$ subject to $g(x, y) \le a$ and $h(x, y) \le b$. We will not consider functions that depend on more than the two variables $x$ and $y$, nor will we have more than the two constraints $g$ and $h$. The two-variable, two-constraint case will provide sufficient insight. The function $f$ represents the goal or objective, and we want to pick the values of $x$ and $y$ that maximize $f$. But constraints $g$ and $h$ restrict the values of $x$ and $y$ that can be selected. For instance, $f$ might refer to the value to society of the plan $(x, y)$, with $g$ and $h$ reflecting resource utilization by the plan of two inputs A and B – labor and capital, say. Then $a$ and $b$ denote the total amounts available of A and B, respectively. The plan $(x, y)$ uses $g(x, y)$ units of labor, and that cannot exceed the total amount of labor, $a$, in the economy. Similarly, the plan $(x, y)$ uses $h(x, y)$ units of capital, but the economy has only $b$ units of capital available.
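The marginal-value interpretation can be checked numerically. In the sketch below (written with SciPy; the objective x·y and the linear labor and capital constraints are our own toy data, not the book's), relaxing the labor constraint by a small ε changes the optimal value by about 1.8·ε, which is the labor multiplier for this instance.

```python
# Numeric illustration of the multiplier as a shadow price (invented data):
# maximize f(x, y) = x*y subject to g: x + 2y <= a (labor)
# and h: 3x + y <= b (capital).
from scipy.optimize import minimize

def best_value(a, b):
    """Maximum of x*y over the feasible region, found numerically."""
    cons = [{"type": "ineq", "fun": lambda z: a - (z[0] + 2 * z[1])},
            {"type": "ineq", "fun": lambda z: b - (3 * z[0] + z[1])}]
    res = minimize(lambda z: -z[0] * z[1], x0=[1.0, 1.0],
                   bounds=[(0, None), (0, None)], constraints=cons,
                   method="SLSQP", tol=1e-10)
    return -res.fun

a, b, eps = 10.0, 15.0, 1e-2
base = best_value(a, b)
# Finite-difference shadow price of labor ~= its Lagrange multiplier.
print("f* =", round(base, 4))   # ~12.0, attained at (x, y) ~ (4, 3)
print("shadow price of labor ~", round((best_value(a + eps, b) - base) / eps, 3))  # ~1.8
```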
- eBook - PDF
The Nonlinear Workbook
Chaos, Fractals, Cellular Automata, Genetic Algorithms, Gene Expression Programming, Support Vector Machine, Wavelets, Hidden Markov Models, Fuzzy Logic with C++, Java and SymbolicC++ Programs
- Willi-Hans Steeb (Author)
- 2011 (Publication Date)
- WSPC (Publisher)
Chapter 12 Optimization

12.1 Lagrange Multiplier Method

In mathematical optimization problems, Lagrange multipliers are a method for finding the minima and maxima of a differentiable function $f$ with equality constraints. The Lagrange multiplier method could also fail even if there is a solution. The Lagrange multiplier method is as follows (Protter [168]). Let $M$ be a manifold and $f$ be a real-valued function of class $C^{(2)}$ on some open set containing $M$. We consider the problem of finding the extrema of the function $f|_M$. This is called a problem of constrained extrema. Assume that $f$ has a constrained extremum at $x^* = (x_1^*, x_2^*, \ldots, x_n^*)$. Let $g_1(x) = 0, \ldots, g_m(x) = 0$ be the constraints (manifolds) with $m < n$. We assume that $f$ and $g_j$ ($j = 1, \ldots, m$) are continuously differentiable in a neighbourhood of $x^*$. Then there exist real numbers $\lambda_1, \ldots, \lambda_m$ such that $x^*$ is a critical point of the function (called the Lagrange function)

$$L(x) := f(x) + \lambda_1 g_1(x) + \cdots + \lambda_m g_m(x).$$

The numbers $\lambda_1, \ldots, \lambda_m$ are called Lagrange multipliers. Thus we have to solve

$$\nabla L(x^*) = 0 \quad \text{and} \quad g_j(x^*) = 0, \quad j = 1, 2, \ldots, m$$

with respect to $x_1^*, \ldots, x_n^*, \lambda_1^*, \ldots, \lambda_m^*$. Here $\nabla$ denotes the gradient, and we have to assume that the rank of the matrix $\nabla g(x^*)$ is $m$, with $m < n$. We will see later that the Lagrange multiplier method can fail even if there is a solution. We have the following theorem.

Theorem. Let $f: \mathbb{R}^n \to \mathbb{R}$ be a twice continuously differentiable function in an open set $\Omega \subseteq \mathbb{R}^n$. Let $S \subseteq \mathbb{R}^n$ be an open set. Let $g = (g_1, g_2, \ldots, g_m): S \to \mathbb{R}^m$ be twice continuously differentiable, and assume that $m < n$. Let $X_0$ be the subset of $S$ where $g$ vanishes, that is, $X_0 := \{x \in S : g(x) = 0\}$. Suppose that $x^* \in X_0$ and assume that there is a neighbourhood $N$ of $x^*$ such that $f$ achieves a maximum or minimum at $x^*$ in $N \cap X_0$.
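The excerpt's warning that the method can fail even when a solution exists is worth a concrete check. A standard counterexample (ours, not the book's) is to minimize f(x, y) = x on the curve y² = x³: the constrained minimum sits at the origin, where the gradient of the constraint vanishes, so the rank condition fails and the system ∇L = 0, g = 0 has no solution.

```python
# Failure case for the Lagrange system: minimize f(x, y) = x subject to
# g(x, y) = x**3 - y**2 = 0. On that curve x >= 0, so the minimum is at
# (0, 0) -- but grad g = (0, 0) there, violating the rank condition.
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)
f = x
g = x**3 - y**2

L = f + lam * g
system = [sp.diff(L, x), sp.diff(L, y), g]
print(sp.solve(system, [x, y, lam], dict=True))         # -> [] : no solution
print(sp.Matrix([g]).jacobian([x, y]).subs({x: 0, y: 0}))  # -> [0, 0]
```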
Petroleum Economics and Engineering
- Hussein K. Abdel-Aal, Mohammed A. Alsahlawi (Authors)
- 2013 (Publication Date)
- CRC Press (Publisher)
In this method, the constraints, as multiples of a Lagrangian multiplier λ, are subtracted from the objective function. The combined equation is called the Lagrangian function. To demonstrate this method, consider the above nonlinear problem (Example 10.8):

Minimize $TC = 3x^2 + 6y^2 - xy$ subject to $x + y = 20$.

Rearranging the constraint to bring all the terms to the left of the equal sign, the following is obtained:

$$x + y - 20 = 0$$

Multiplying this form of the constraint by λ, the Lagrangian multiplier, and adding (subtracting in case of maximization) the result to the original objective function will yield the Lagrangian function:

$$L = 3x^2 + 6y^2 - xy + \lambda(x + y - 20)$$

The Lagrangian function can be treated as an unconstrained minimization problem. The partial derivative of the Lagrangian function with respect to each of the three unknown variables x, y, and λ needs to be determined. These are as follows:

$$\partial L/\partial x = 6x - y + \lambda$$
$$\partial L/\partial y = -x + 12y + \lambda$$
$$\partial L/\partial \lambda = x + y - 20$$

Setting the above equations equal to zero will result in a system of three equations and three unknowns:

$$6x - y + \lambda = 0 \tag{10.8}$$
$$-x + 12y + \lambda = 0 \tag{10.9}$$
$$x + y - 20 = 0 \tag{10.10}$$

Equation (10.10) is the constraint condition imposed on the original optimization problem. Solving the equations simultaneously will determine the values of x, y, and λ:

$$x = 13, \quad y = 7, \quad \lambda = -71$$

The Lagrangian multiplier λ has an important economic interpretation. It indicates the marginal effect on the original objective function of implementing the constraint requirement by one unit. Here λ can be interpreted as the marginal reduction in total cost that would result if only 19 instead of 20 units of combined output were required. Although the Lagrangian method is more flexible than the substitution method, it can solve only small problems. As the problem size expands, computerized approaches should be used.
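The worked example above is small enough to verify symbolically. The following SymPy snippet reproduces Equations (10.8) through (10.10) and returns the same x = 13, y = 7, and λ = −71; nothing beyond the text's own numbers is assumed.

```python
# Reproducing the worked example with SymPy to check x = 13, y = 7,
# lambda = -71 (the same numbers as the text).
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)
TC = 3 * x**2 + 6 * y**2 - x * y
L = TC + lam * (x + y - 20)

# Equations (10.8)-(10.10): set each partial derivative of L to zero.
system = [sp.diff(L, x), sp.diff(L, y), sp.diff(L, lam)]
sol = sp.solve(system, [x, y, lam], dict=True)[0]
print(sol)                              # {x: 13, y: 7, lam: -71}
print("TC at optimum:", TC.subs(sol))   # 710
```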
- David S. K. Ting (Author)
- 2021 (Publication Date)
- Wiley (Publisher)
Other challenges faced by the Lagrange Multiplier method, shown in Figure 8.4, are saddle points and local extrema. A good understanding of the problem at hand will avoid these potential pitfalls.

Figure 8.4 Some challenges faced by the Lagrange Multiplier method. Source: D. Ting.

8.3 Unconstrained, Multi-Variable, Objective Function

Let us increase the number of independent variables to more than one. For the case with two independent variables, i.e. $y = y(x_1, x_2)$, such as that shown in Figure 8.5, we can deduce whether the extremum is a minimum or a maximum by checking the second derivatives, based on the following criteria. If

$$\frac{\partial^2 y}{\partial x_1^2} > 0 \quad \text{and} \quad \frac{\partial^2 y}{\partial x_1^2}\frac{\partial^2 y}{\partial x_2^2} - \left(\frac{\partial^2 y}{\partial x_1 \partial x_2}\right)^2 > 0, \tag{8.5}$$

then it is a minimum. If

$$\frac{\partial^2 y}{\partial x_1^2} < 0 \quad \text{and} \quad \frac{\partial^2 y}{\partial x_1^2}\frac{\partial^2 y}{\partial x_2^2} - \left(\frac{\partial^2 y}{\partial x_1 \partial x_2}\right)^2 > 0, \tag{8.6}$$

then it is a maximum. One can, and probably should, always check to see if the objective function is increasing or decreasing by moving the independent variable(s) a little away from the deduced optimum.

Figure 8.5 Extrema of a typical two-independent-variable objective function, $y = f(x_1, x_2)$. Source: Y. Yang.

The general expression for a multi-variable objective function is

$$y = y(x_1, x_2, \ldots, x_n). \tag{8.7}$$

The Lagrange Multiplier equation for this unconstrained optimization is

$$\frac{\partial y}{\partial x_i} = 0, \quad i = 1, 2, \ldots, n. \tag{8.8}$$

In short, there are $n$ expressions that are all equal to zero. These $n$ equations can be solved to obtain the optimum, $y^*$, which takes place at $x_1 = x_1^*, x_2 = x_2^*, \ldots, x_n = x_n^*$.

Example 8.2 Minimize our production of disorder

Given: Among others, Dean Kamen admittedly proclaimed, "I'm a human entropy producer." In other words, we are but disorder (entropy) generators. Proper knowledge of entropy can enable us to accomplish the same task while producing considerably less disorder. The entropy generation of a compressed air energy storage system, used to mitigate the intermittent nature of renewable energy and the mismatch between supply and demand of a power grid (Ebrahimi et al., 2021), can be expressed as a function of the pressure change, $x_1$, and the thermal energy (heat) removal or addition, $x_2$
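The unconstrained conditions (8.7) and (8.8), together with the second-derivative check, translate directly into a few lines of SymPy. The objective below is an invented two-variable example, not Ting's compressed-air system.

```python
# Sketch of conditions (8.7)-(8.8) plus the second-derivative check,
# for an invented objective y = x1**2 + x2**2 - x1*x2 + 1.
import sympy as sp

x1, x2 = sp.symbols("x1 x2", real=True)
yobj = x1**2 + x2**2 - x1 * x2 + 1

grad = [sp.diff(yobj, v) for v in (x1, x2)]      # n equations, all zero (8.8)
crit = sp.solve(grad, [x1, x2], dict=True)[0]    # -> {x1: 0, x2: 0}

H = sp.hessian(yobj, (x1, x2)).subs(crit)        # matrix of second derivatives
# Positive-definite Hessian -> minimum; negative-definite -> maximum.
print(crit, "is a minimum:", all(ev > 0 for ev in H.eigenvals()))
```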
- eBook - ePub
- Simant Ranjan Upreti (Author)
- 2016 (Publication Date)
- CRC Press (Publisher)
Chapter 4 Lagrange Multipliers
In this chapter, we introduce the concept of Lagrange multipliers. We show how the Lagrange Multiplier Rule and the John Multiplier Theorem help us handle the equality and inequality constraints in optimal control problems.

4.1 Motivation
Consider the simplest optimal control problem, in which we wish to find the control function u that optimizes the objective functional I, given by Equation (3.4), subject to the differential equation constraint of Equation (3.5) with its initial condition. At the optimum, it is necessary that the variation given by Equation (3.6) is zero, while satisfying the differential equation constraint. Because the constraint ties y and u together, we cannot have δy or δu arbitrary, and thus independent of each other, in the above equation. Recall from the last chapter that when the variations are arbitrary, their coefficients are individually zero, thereby leading to the necessary conditions for the optimum. This simplification is, however, not possible with the above equation.

Dealing with this problem would be easy if Equation (3.5) could be integrated to provide an explicit solution for y in terms of u. Then one could substitute y = y(u) into Equation (3.4) and obtain I in terms of u alone. However, this approach fails in most problems, where analytical solutions of the involved constraints are simply not possible.

4.2 Role of Lagrange Multipliers
The above difficulty is surmounted by introducing an undetermined function, λ(t), called the Lagrange multiplier, into the augmented objective functional J defined by Equation (3.7). At the optimum, the variation of J is given by Equation (4.1), where the role of λ is to untie y from u by assuming certain values in the interval [0, tf]. Given such a λ, we are then able to vary δy and δu arbitrarily and independently of each other. This ability leads to the simplified necessary conditions for the optimum of J and, equivalently, for the constrained optimum of I. The conditions include an additional equation for λ
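One way to see a multiplier function λ(t) at work is to discretize a toy problem: minimize the integral of u² subject to dy/dt = u with y(0) = 0 and y(T) = 1, whose exact optimal control is the constant u = 1/T. The SymPy sketch below is our own illustration of the idea, not Upreti's formulation; it attaches one multiplier to each discretized dynamics constraint so that all variations become independent.

```python
# Discrete toy version of the augmented functional J = I + sum(lam_k * c_k):
# minimize I ~ sum(u_k^2 * dt) subject to y_{k+1} - y_k - u_k*dt = 0,
# with y(0) = 0 and y(T) = 1. Exact answer: u = 1/T everywhere.
import sympy as sp

N, T = 4, 1.0
dt = T / N
y = sp.symbols(f"y0:{N + 1}")        # states y0 .. yN
u = sp.symbols(f"u0:{N}")            # controls
lam = sp.symbols(f"lam0:{N}")        # one multiplier per constraint

I = sum(uk**2 * dt for uk in u)                           # objective
cons = [y[k + 1] - y[k] - u[k] * dt for k in range(N)]    # dynamics, each = 0
J = I + sum(l * c for l, c in zip(lam, cons))             # augmented functional

unknowns = list(y[1:N]) + list(u) + list(lam)   # y0 and yN are fixed below
system = [sp.diff(J, v).subs({y[0]: 0, y[N]: 1}) for v in unknowns]
sol = sp.solve(system, unknowns, dict=True)[0]
print([sol[uk] for uk in u])         # every u_k -> 1.0 (= 1/T), as expected
```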
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.