This textbook offers a concise yet rigorous introduction to calculus of variations and optimal control theory, and is a self-contained resource for graduate students in engineering, applied mathematics, and related subjects. Designed specifically for a one-semester course, the book begins with calculus of variations, preparing the ground for optimal control. It then gives a complete proof of the maximum principle and covers key topics such as the Hamilton-Jacobi-Bellman theory of dynamic programming and linear-quadratic optimal control.
Calculus of Variations and Optimal Control Theory also traces the historical development of the subject and features numerous exercises, notes and references at the end of each chapter, and suggestions for further study.
Offers a concise yet rigorous introduction
Requires limited background in control theory or advanced mathematics
Provides a complete proof of the maximum principle
Uses consistent notation in the exposition of classical and modern topics
Traces the historical development of the subject
Solutions manual (available only to teachers)
Leading universities that have adopted this book include:
University of Illinois at Urbana-Champaign ECE 553: Optimum Control Systems
Georgia Institute of Technology ECE 6553: Optimal Control and Optimization
University of Pennsylvania ESE 680: Optimal Control Theory
University of Notre Dame EE 60565: Optimal Control
We begin by describing, very informally and in general terms, the class of optimal control problems that we want to eventually be able to solve. The goal of this brief motivational discussion is to fix the basic concepts and terminology without worrying about technical details.
The first basic ingredient of an optimal control problem is a control system. It generates possible behaviors. In this book, control systems will be described by ordinary differential equations (ODEs) of the form
ẋ = f(t, x, u),  x(t0) = x0   (1.1)
where x is the state taking values in Rn, u is the control input taking values in some control set U ⊆ Rm, t is time, t0 is the initial time, and x0 is the initial state. Both x and u are functions of t, but we will often suppress their time arguments.
The second basic ingredient is the cost functional. It associates a cost with each possible behavior. For a given initial data (t0,x0), the behaviors are parameterized by control functions u. Thus, the cost functional assigns a cost value to each admissible control. In this book, cost functionals will be denoted by J and will be of the form
J(u) := ∫ from t0 to tf of L(t, x(t), u(t)) dt + K(tf, xf)   (1.2)
where L and K are given functions (running cost and terminal cost, respectively), tf is the final (or terminal) time, which is either free or fixed, and xf := x(tf) is the final (or terminal) state, which is either free or fixed or belongs to some given target set. Note again that u itself is a function of time; this is why we say that J is a functional (a real-valued function on a space of functions).
The optimal control problem can then be posed as follows: Find a control u that minimizes J(u) over all admissible controls (or at least over nearby controls). Later we will need to come back to this problem formulation and fill in some technical details. In particular, we will need to specify what regularity properties should be imposed on the function f and on the admissible controls u to ensure that state trajectories of the control system are well defined. Several versions of the above problem (depending, for example, on the role of the final time and the final state) will be stated more precisely when we are ready to study them. The reader who wishes to preview this material can find it in Section 3.3.
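To make the interplay between a control, its trajectory, and its cost concrete, here is a minimal numerical sketch. It simulates the ODE (1.1) by forward Euler and approximates the cost functional (1.2) by a Riemann sum. The particular system, cost, and control below are illustrative choices for demonstration, not examples from the text:

```python
import numpy as np

def simulate_and_cost(f, L, K, u, t0, tf, x0, steps=1000):
    """Forward-Euler simulation of x' = f(t, x, u(t)) together with a
    Riemann-sum approximation of
    J(u) = integral from t0 to tf of L(t, x, u) dt + K(tf, x(tf))."""
    dt = (tf - t0) / steps
    t, x, J = t0, np.asarray(x0, dtype=float), 0.0
    for _ in range(steps):
        ut = u(t)
        J += L(t, x, ut) * dt           # accumulate running cost
        x = x + np.asarray(f(t, x, ut)) * dt  # Euler step for the state
        t += dt
    return x, J + K(t, x)               # add terminal cost

# Illustrative data (assumed, not from the text): a double integrator
# x = (position, velocity) with acceleration as control, quadratic cost.
f = lambda t, x, u: np.array([x[1], u])
L = lambda t, x, u: x[0]**2 + u**2      # running cost
K = lambda t, x: 0.0                    # no terminal cost
u = lambda t: -1.0 if t < 1.0 else 1.0  # one particular admissible control

xf, J = simulate_and_cost(f, L, K, u, t0=0.0, tf=2.0, x0=[1.0, 0.0])
```

The point of the sketch is that, once (t0, x0) is fixed, each choice of the function u determines a trajectory and hence a single number J(u); the optimal control problem asks for the u making that number smallest.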
It can be argued that optimality is a universal principle of life, in the sense that many, if not most, processes in nature are governed by solutions to some optimization problems (although we may never know exactly what is being optimized). We will soon see that fundamental laws of mechanics can be cast in an optimization context. From an engineering point of view, optimality provides a very useful design principle, and the cost to be minimized (or the profit to be maximized) is often naturally contained in the problem itself. Some examples of optimal control problems arising in applications include the following:
• Send a rocket to the moon with minimal fuel consumption.
• Produce a given amount of chemical in minimal time and/or with minimal amount of catalyst used (or maximize the amount produced in given time).
• Bring sales of a new product to a desired level while minimizing the amount of money spent on the advertising campaign.
• Maximize throughput or accuracy of information transmission over a communication channel with a given bandwidth/capacity.
The reader will easily think of other examples. Several specific optimal control problems will be examined in detail later in the book. We briefly discuss one simple example here to better illustrate the general problem formulation.
Example 1.1. Consider a simple model of a car moving on a horizontal line. Let x be the car's position and let u be the acceleration, which acts as the control input. We put a bound on the maximal allowable acceleration by letting the control set U be the bounded int...