
Dynamic Programming

Dynamic Programming is a method for solving complex problems by breaking them down into simpler subproblems and solving each subproblem only once, storing the solution to avoid redundant calculations. It is often used in optimization problems and is based on the principle of overlapping subproblems and optimal substructure. This approach can significantly improve the efficiency of solving problems with recursive structures.

Written by Perlego with AI-assistance

11 Key excerpts on "Dynamic Programming"

  • The Art of Algorithm Design
    • Sachi Nandan Mohanty, Pabitra Kumar Tripathy, Suneeta Satpathy (Authors)
    • 2021 (Publication Date)
    Chapter 4

    Dynamic Programming

    DOI: 10.1201/9781003093886-4

    4.1   Dynamic Programming

    Dynamic Programming is a very powerful technique for solving a particular class of problems. It demands an elegant formulation of the approach and clear thinking, while the coding itself is easy. The idea is simple: if you have solved a problem for a given input, save the result for future reference so as to avoid solving the same problem again. If the given problem can be broken up into smaller subproblems, and these smaller subproblems can in turn be divided into still smaller ones, and in this process you observe overlapping subproblems, then that is a strong hint for DP. In addition, the optimal solutions to the subproblems contribute to the optimal solution of the given problem (referred to as the optimal substructure property).
    There are two ways of doing this:
    1. Top-Down: Start solving the given problem by breaking it down. If you see that a subproblem has already been solved, just return the saved answer; if it has not been solved, solve it and save the answer. This is usually easy to think of and very intuitive. This is referred to as Memoization.
    2. Bottom-Up: Analyze the problem, determine the order in which the subproblems are solved, and start solving from the trivial subproblem up towards the given problem. In this process it is guaranteed that the subproblems are solved before they are needed. This is referred to as Dynamic Programming.
    Dynamic Programming is often used to solve optimization problems. In these cases, the solution corresponds to an objective function whose value needs to be optimal (e.g., maximal or minimal). In general, it is sufficient to produce one optimal solution, even though there may be many optimal solutions for a given problem.
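The top-down/bottom-up distinction described in this excerpt can be made concrete with a small optimization problem. The sketch below is not from the book; the coin denominations and function names are illustrative. It computes the minimum number of coins needed to pay a given amount, once by top-down memoization and once by filling a bottom-up table; both solve each subproblem exactly once.

    from functools import lru_cache

    COINS = (1, 4, 5)  # illustrative denominations


    def min_coins_top_down(amount: int) -> int:
        """Top-down: recurse on the natural definition, caching each solved subproblem."""
        @lru_cache(maxsize=None)
        def solve(rest: int):
            if rest == 0:
                return 0
            best = float("inf")
            for c in COINS:
                if c <= rest:
                    best = min(best, 1 + solve(rest - c))
            return best
        return int(solve(amount))


    def min_coins_bottom_up(amount: int) -> int:
        """Bottom-up: fill a table from the trivial subproblem (amount 0) upwards."""
        best = [0] + [float("inf")] * amount
        for a in range(1, amount + 1):
            for c in COINS:
                if c <= a:
                    best[a] = min(best[a], 1 + best[a - c])
        return int(best[amount])


    print(min_coins_top_down(13), min_coins_bottom_up(13))  # 3 3, e.g. 4 + 4 + 5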
  • An Elementary Approach to Design and Analysis of Algorithms
    • Lekh Raj Vermani, Shalini Vermani (Authors)
    • 2019 (Publication Date)
    • WSPC (EUROPE)
      (Publisher)
    Chapter 6

    Dynamic Programming

    Dynamic Programming solves problems by combining the solutions to sub-problems. In this procedure, a given problem is partitioned into sub-problems which are not independent but share the characteristics of the original problem, i.e., the sub-problems are similar in nature to the problem itself. Moreover, some sub-problems of the original problem occur within other sub-problems. Every sub-problem is solved just once and its answer is saved in a table, which avoids the work of re-computing the answer every time.
    Dynamic Programming is applied to optimization problems, for which there can be more than one possible solution. Each solution to a problem or a sub-problem has a value (a cost, profit or distance), and we wish to find a solution with the optimal value. Such a solution is called an optimal solution.
    Unlike the simplex and dual simplex methods for solving linear programming problems, there is no single, universal method for solving dynamic programming problems; a solution procedure has to be devised for each class of typical problems. However, one thing is common to most problems solved using Dynamic Programming: given the constraints and other available data of the problem, a difference equation is formed, which is then solved in a step-by-step manner. We explain the solution process using Dynamic Programming for some classes of typical problems.
    Following are the essential steps to develop a Dynamic Programming algorithm for optimization problems:
    1. Characterize the structure of an optimal solution to the problem.
    2. Define the value of an optimal solution in a recursive manner (i.e., obtain a difference equation for an optimal value).
    3. Compute the value of an optimal solution (i.e., an optimal value) in a bottom-up fashion.
    4. From the computed information, construct an optimal solution, if required.

    6.1. Matrix-Chain Multiplication

    Let a sequence or chain (A1, A2, ..., An) of n matrices compatible for multiplication be given. Recall that matrices A, B are compatible with respect to multiplication if the number of columns of A equals the number of rows of B; the result AB is a matrix whose number of rows equals the number of rows of A and whose number of columns equals the number of columns of B. Also, the product is obtained using pqr scalar multiplications when A is a p × q matrix and B is a q × r matrix. The time taken for the multiplication is proportional to the number pqr of scalar multiplications, and the time taken for addition of scalars is negligible and so is ignored. We thus say that the time taken for the multiplication AB is pqr time units. If we have a chain A1, A
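As a quick worked illustration of the pqr cost measure (the dimensions below are illustrative and not from the book), consider A1 of size 10 × 100, A2 of size 100 × 5 and A3 of size 5 × 50. The parenthesization (A1A2)A3 costs 10·100·5 + 10·5·50 = 7500 scalar multiplications, while A1(A2A3) costs 100·5·50 + 10·100·50 = 75000, so the chosen order changes the work by a factor of ten:

    # Cost of multiplying a p x q matrix by a q x r matrix is p*q*r scalar multiplications.
    p, q, r, s = 10, 100, 5, 50   # illustrative dimensions: A1 is p x q, A2 is q x r, A3 is r x s

    cost_left = p * q * r + p * r * s    # (A1 A2) A3:  5000 + 2500  = 7500
    cost_right = q * r * s + p * q * s   # A1 (A2 A3): 25000 + 50000 = 75000
    print(cost_left, cost_right)         # 7500 75000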
  • Electric Power System Applications of Optimization
    • James A. Momoh (Author)
    • 2017 (Publication Date)
    • CRC Press
      (Publisher)
    Chapter 8

    Dynamic Programming

    8.1 Introduction

    Dynamic Programming (DP) is an optimization approach that transforms a complex problem into a sequence of simpler problems [3-6, 9, 12-14]; its essential characteristic is the multistage nature of the optimization procedure. More so than the optimization techniques described previously, DP provides a general framework for analyzing many problem types. Within this framework a variety of optimization techniques can be employed to solve particular aspects of a more general formulation. Usually creativity is required before we can recognize that a particular problem can be cast effectively as a dynamic program, and often subtle insights are necessary to restructure the formulation so that it can be solved effectively.
    The DP method was developed in the 1950s through the work of Richard Bellman [1, 2], who is still the doyen of research workers in this field. The essential feature of the method is that a multivariable optimization problem is decomposed into a series of stages, optimization being done at each stage with respect to one variable only. Bellman [1] gave it the rather undescriptive name of DP. A more significant name would be recursive optimization.
    Both discrete and continuous problems are amenable to this method, and deterministic as well as stochastic models can be handled by it. The complexities increase tremendously with the number of constraints. A single-constraint problem is relatively simple, but with more than two constraints the problem can become formidable.
    The DP technique, when applicable, represents or decomposes a multistage decision problem as a sequence of single-stage decision problems. Thus an N-variable problem is represented as a sequence of N single-variable problems that are solved successively. In most cases, these N subproblems are easier to solve than the original problem. The decomposition into N subproblems is done in such a manner that the optimal solution of the original N-variable problem can be obtained from the optimal solutions of the N one-dimensional problems. It is important to note that the particular optimization technique used for the optimization of the N
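A minimal sketch of this stage-wise decomposition (the numbers and names below are illustrative, not from the book): K units of a resource are allocated among N activities, one activity per stage, so an N-variable allocation problem is solved as N one-variable problems by backward recursion.

    # Illustrative data: PROFIT[i][x] = return from giving x units (0..K) to activity i.
    PROFIT = [
        [0, 5, 9, 12],   # activity 0
        [0, 4, 8, 13],   # activity 1
        [0, 6, 10, 11],  # activity 2
    ]
    K = 3  # total units available


    def best_allocation(profit, capacity):
        """Backward recursion: decide one variable (units for one activity) per stage."""
        n = len(profit)
        # value[i][k] = best return obtainable from activities i..n-1 with k units left.
        value = [[0] * (capacity + 1) for _ in range(n + 1)]
        for i in range(n - 1, -1, -1):
            for k in range(capacity + 1):
                value[i][k] = max(profit[i][x] + value[i + 1][k - x] for x in range(k + 1))
        return value[0][capacity]


    print(best_allocation(PROFIT, K))  # 15 with these illustrative numbers (e.g. 1 + 1 + 1 units)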
  • Algorithm Design with Haskell
    PART FIVE: Dynamic Programming

    The term Dynamic Programming was coined by Richard Bellman in 1950 to describe his research into multi-stage decision processes. The word programming was chosen as a synonym for planning, to mean the process of determining the sequence of decisions that have to be made, while dynamic suggested the evolution of the system over time. These days, Dynamic Programming as a technique of algorithm design means something much more specific. It involves a two-stage process in which a problem, usually but not necessarily an optimisation problem, is formulated in recursive terms and then some efficient way of computing the solution is found. Unlike a divide-and-conquer problem, the subproblems generated by the recursive solution can overlap, so naive execution of the recursive algorithm will involve solving the same subproblem many times over, possibly an exponential number of times.
    One way to understand the problem of overlap is to look at the dependency graph associated with a recursive function. This is a directed graph whose vertices represent function calls and whose directed edges show the dependency of each call on recursive calls. While the dependency graph of a divide-and-conquer algorithm is a tree of some kind with no shared vertices, the graph of a Dynamic Programming algorithm is an acyclic directed graph, possibly with many shared vertices. A vertex is shared if there is more than one incoming edge to the vertex.
    The first job in solving an optimisation problem by Dynamic Programming is simply to obtain a recursive solution. As with thinning algorithms, the key step is to exploit a suitable monotonicity condition. This condition enables an optimal solution to a problem to be expressed in terms of optimal sub-solutions. When the shape of the recursion is inductive, a thinning algorithm is appropriate; when it is not, the techniques of Dynamic Programming come into play.
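To make the point about shared vertices concrete, the short sketch below (in Python, not from the book) counts how often each subproblem is visited when a recursive Fibonacci definition is run naively: the dependency graph has only n + 1 distinct vertices, yet the naive recursion visits them an exponential number of times.

    from collections import Counter

    calls = Counter()  # how many times each subproblem (vertex) is visited


    def fib_naive(n: int) -> int:
        """Naive recursion: shared subproblems are recomputed over and over."""
        calls[n] += 1
        if n < 2:
            return n
        return fib_naive(n - 1) + fib_naive(n - 2)


    fib_naive(20)
    print(sum(calls.values()), "calls for", len(calls), "distinct subproblems")  # 21891 calls for 21 distinct subproblems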
  • Analysis and Design of Algorithms- A Beginner's Hope
    • Shefali Singhal, Neha Garg (Authors)
    • 2018 (Publication Date)
    • BPB Publications
      (Publisher)
    Chapter 5

    Dynamic Programming

    In this chapter, the student will understand:

    What is the dynamic method for problem-solving? How is it different from other strategies? Which main problem areas are solved with Dynamic Programming? How does it differ from the greedy method?

    5.1 Dynamic Programming

    Dynamic Programming usually applies to optimization problems in which a set of choices must be made to arrive at an optimal solution. As the choices are made, subproblems of the same form as the original problem often arise. Dynamic Programming is effective when a given sub-problem may arise from more than one partial set of choices; the basic idea is to store the solution to each such sub-problem so that it can be reused if the sub-problem reappears.
    Divide-and-conquer algorithms partition a problem into a set of independent sub-problems, solve the sub-problems recursively, and then combine their solutions to obtain a solution to the original problem. In contrast, Dynamic Programming is applicable when the sub-problems are not independent, that is, when sub-problems share sub-sub-problems. A dynamic-programming algorithm solves every sub-sub-problem just once and then saves its answer in a table, thereby avoiding the work of recalculating the answer every time the sub-sub-problem is encountered.
    The development of a dynamic-programming algorithm proceeds in a sequence of four steps:
    1. Characterize the structure of an optimal solution.
    2. Recursively define the value of an optimal solution.
    3. Calculate the value of an optimal solution in a bottom-up fashion.
    4. Construct an optimal solution from computed information.

    5.2 Matrix Chain Multiplication

    Using this algorithm, for a given chain of matrices, our aim is to find the most efficient way to multiply the matrices together. The problem is not actually to perform the multiplications, but to decide the order of the multiplications, which in turn results in the minimum amount of computation.
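A compact sketch of the standard matrix-chain dynamic program (a generic implementation, not the book's own code): m[i][j] holds the minimum number of scalar multiplications needed to compute the product Ai...Aj, and s[i][j] records the best split point, from which the multiplication order can be read off.

    def matrix_chain_order(dims):
        """dims[i-1] x dims[i] are the dimensions of matrix A_i, for i = 1..n."""
        n = len(dims) - 1
        m = [[0] * (n + 1) for _ in range(n + 1)]  # m[i][j]: minimal cost of A_i..A_j
        s = [[0] * (n + 1) for _ in range(n + 1)]  # s[i][j]: best split point k
        for length in range(2, n + 1):             # length of the sub-chain
            for i in range(1, n - length + 2):
                j = i + length - 1
                m[i][j] = float("inf")
                for k in range(i, j):
                    cost = m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                    if cost < m[i][j]:
                        m[i][j], s[i][j] = cost, k
        return m, s


    def parenthesization(s, i, j):
        if i == j:
            return f"A{i}"
        k = s[i][j]
        return f"({parenthesization(s, i, k)}{parenthesization(s, k + 1, j)})"


    m, s = matrix_chain_order([10, 100, 5, 50])    # A1: 10x100, A2: 100x5, A3: 5x50
    print(m[1][3], parenthesization(s, 1, 3))      # 7500 ((A1A2)A3)

With these illustrative dimensions, the table confirms that (A1A2)A3 is the cheaper order; only the split points, not the products themselves, are computed.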
  • Key dynamics in computer programming
    • Adele Kuzmiakova (Author)
    • 2023 (Publication Date)
    • Arcler Press
      (Publisher)
    Chapter 6: Dynamic Programming

    Contents: 6.1. Introduction; 6.2. An Elementary Example; 6.3. Formalizing the Dynamic-Programming Approach; 6.4. Optimal Capacity Expansion; 6.5. Discounting Future Returns; 6.6. Shortest Paths in a Network; 6.7. Continuous State-Space Problems; 6.8. Dynamic Programming Under Uncertainty; References.

    6.1. Introduction

    Dynamic Programming is an optimization approach that transforms a complex problem into a sequence of simpler problems. The essential characteristic of Dynamic Programming is the multistage nature of the optimization procedure. More so than the optimization techniques described previously, Dynamic Programming provides a general framework for analyzing many problem types. Within this framework a variety of optimization techniques can be employed to solve particular aspects of a more general formulation. Usually, creativity is required before we can recognize that a particular problem can be cast effectively as a dynamic program; and often subtle insights are necessary to restructure the formulation so that it can be solved effectively (Amini et al., 1990; Osman et al., 2005). We begin by providing a general insight into the Dynamic Programming approach by treating a simple example in some detail.
  • Design and Analysis of Algorithms
    eBook - PDF

    Design and Analysis of Algorithms

    A Contemporary Perspective

    Chapter 5. Optimization II: Dynamic Programming

    The idea behind Dynamic Programming is very similar to the concept of divide and conquer. In fact, one often specifies such an algorithm by writing down the recursive sub-structure of the problem being solved. If we directly use a divide and conquer strategy to solve such a problem, it can lead to an inefficient implementation. Consider the following example: the Fibonacci series is given by the sequence 1, 1, 2, 3, 5, 8, ... If F_n denotes the nth number in this sequence, then F_0 = F_1 = 1 and, subsequently, F_n = F_{n-1} + F_{n-2}. This immediately gives a divide and conquer algorithm (see Figure 5.1) for the problem of computing F_n for an input number n. However, this algorithm is very inefficient – it takes exponential time (see Section 1.1 regarding this aspect), even though there is a simple linear time algorithm for this problem.
    The reason why the divide and conquer algorithm performs so poorly is that the same recursive call is made multiple times. Figure 5.2 shows the recursive calls made while computing F_6. This is quite wasteful, and one way of handling this would be to store the results of a recursive call in a table so that multiple recursive calls for the same input can be avoided. Indeed, a simple way of fixing this algorithm would be to have an array F[] of length n and, starting from i = 0 onward, fill in the entries F[i] of this array.
    Thus, Dynamic Programming is a divide and conquer strategy done in a careful manner. Typically, one specifies the table which should store all possible recursive calls that the algorithm will make. In fact, the final algorithm does not make any recursive calls. The entries in the table are computed such that whenever we need to solve a sub-problem, all the sub-problems appearing in the recursive calls needed for it have already been solved and stored in the table.
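A short Python sketch of the two implementations contrasted in this excerpt (the indexing and base cases follow the excerpt, F_0 = F_1 = 1; the code itself is not from the book).

    def fib_divide_and_conquer(n: int) -> int:
        """Direct recursion on F_n = F_{n-1} + F_{n-2}: exponential time."""
        if n <= 1:
            return 1
        return fib_divide_and_conquer(n - 1) + fib_divide_and_conquer(n - 2)


    def fib_table(n: int) -> int:
        """Dynamic Programming: fill the array F[0..n] so each entry is computed once."""
        F = [1] * (n + 1)
        for i in range(2, n + 1):
            F[i] = F[i - 1] + F[i - 2]
        return F[n]


    print(fib_divide_and_conquer(6), fib_table(6))  # 13 13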
  • A Textbook of Data Structures and Algorithms, Volume 3
    eBook - PDF

    A Textbook of Data Structures and Algorithms, Volume 3

    Mastering Advanced Data Structures and Algorithm Design Strategies

    • G. A. Vijayalakshmi Pai (Author)
    • 2022 (Publication Date)
    • Wiley-ISTE
      (Publisher)
    Chapter 21: Dynamic Programming

    In this chapter, the algorithm design technique of Dynamic Programming is detailed. The technique is demonstrated over the 0/1 Knapsack Problem, the Traveling Salesperson Problem, the All-Pairs Shortest Path Problem and Optimal Binary Search Tree Construction.

    21.1. Introduction

    Dynamic Programming is an effective algorithm design technique built on Bellman's Principle of Optimality, which states that 'an optimal sequence of decisions has the property that whatever the initial state and decisions are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision'. The Dynamic Programming strategy is applicable to optimization problems, which comprise an objective function that needs to be maximized or minimized, subject to constraints that need to be satisfied. A candidate solution that satisfies the constraints is termed a feasible solution, and when a feasible solution results in the best objective function value, it is termed an optimal solution. The greedy method (discussed in Chapter 20) also works over problems whose characteristics are as defined above and obtains optimal solutions to these problems. How then is a greedy method different from Dynamic Programming? A greedy method works in an iterative fashion, selecting the objects constituting the solution set one by one and constructing a feasible solution set, which eventually turns into an optimal solution set. However, there are problems where generating a single optimal decision sequence is not always possible, and therefore a greedy method might not work on these problems. It is here that Dynamic Programming finds its rightful place. Dynamic Programming, unlike the greedy method, generates multiple decision sequences using Bellman's principle of optimality and eventually delivers the best decision sequence that leads to the optimal solution to the problem.
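Since the excerpt lists the 0/1 Knapsack Problem among its examples, here is a minimal table-filling sketch of it (the weights, values and capacity are illustrative, not taken from the book), including an instance where a greedy choice by value-to-weight ratio is suboptimal while the dynamic program recovers the best decision sequence.

    def knapsack_01(weights, values, capacity):
        """best[w] = maximum value achievable with total weight at most w."""
        best = [0] * (capacity + 1)
        for wt, val in zip(weights, values):
            # Traverse capacities downwards so that each item is used at most once.
            for w in range(capacity, wt - 1, -1):
                best[w] = max(best[w], best[w - wt] + val)
        return best[capacity]


    # Greedy by value/weight ratio takes the 10 kg and 20 kg items for a value of 160;
    # the optimal decision sequence takes the 20 kg and 30 kg items instead.
    print(knapsack_01([10, 20, 30], [60, 100, 120], 50))  # 220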
  • An Introduction to the Analysis of Algorithms
    • Michael Soltys (Author)
    • 2009 (Publication Date)
    • WSPC
      (Publisher)
    Chapter 4: Dynamic Programming

    Dynamic Programming is an algorithmic technique that is closely related to the divide and conquer approach we saw in the previous chapter. However, while the divide and conquer approach is essentially recursive, and so "top down," Dynamic Programming works "bottom up." A Dynamic Programming algorithm creates an array of related but simpler subproblems, and then it computes the solution to the big complicated problem by using the solutions to the easier subproblems, which are stored in the array. We usually want to maximize profit or minimize cost. There are three steps in finding a Dynamic Programming solution to a problem: (i) define a class of subproblems, (ii) give a recurrence based on solving each subproblem in terms of simpler subproblems, and (iii) give an algorithm for computing the recurrence.

    4.1 Longest monotone subsequence problem

    Input: d, a_1, a_2, ..., a_d ∈ N. Output: L = the length of the longest monotone non-decreasing subsequence. Note that a subsequence need not be consecutive: a_{i1}, a_{i2}, ..., a_{ik} is a monotone subsequence provided that 1 ≤ i1 < i2 < ... < ik ≤ d and a_{i1} ≤ a_{i2} ≤ ... ≤ a_{ik}. For example, the length of the longest monotone subsequence (henceforth LMS) of {4, 6, 5, 9, 1} is 3.
    We first define an array of subproblems: R(j) = the length of the longest monotone subsequence which ends in a_j. The answer can be extracted from the array R by computing L = max over 1 ≤ j ≤ d of R(j). The next step is to find a recurrence. Let R(1) = 1, and for j > 1,

        R(j) = 1,                                          if a_i > a_j for all 1 ≤ i < j,
        R(j) = 1 + max{ R(i) : 1 ≤ i < j and a_i ≤ a_j },  otherwise.

    The recurrence is computed by a simple double loop:

        for j : 1..d do
            max ← 0
            for i : 1..j − 1 do
                if R(i) > max and a_i ≤ a_j then
                    max ← R(i)
                end if
            end for
            R(j) ← max + 1
        end for

    Problem 4.1.
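The recurrence above translates directly into a few lines of Python (a sketch, not the book's code); on the example {4, 6, 5, 9, 1} it returns 3, as stated in the excerpt.

    def longest_monotone_subsequence(a):
        """R[j] = length of the longest non-decreasing subsequence ending at a[j]."""
        d = len(a)
        R = [1] * d
        for j in range(1, d):
            best = 0
            for i in range(j):
                if a[i] <= a[j] and R[i] > best:
                    best = R[i]
            R[j] = best + 1
        return max(R) if R else 0


    print(longest_monotone_subsequence([4, 6, 5, 9, 1]))  # 3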
  • Dynamic Programming
    eBook - PDF

    Dynamic Programming

    Foundations and Principles, Second Edition

    • Moshe Sniedovich (Author)
    • 2010 (Publication Date)
    • CRC Press
      (Publisher)
    So, my program for this chapter is as follows. I begin with a brief review of Dynamic Programming's treatment of optimization problems, as delineated in Chapters 3-4. This analysis brings out its fundamental strategy, and it serves as a basis for the formulation of an abstract Dynamic Programming model. I then illustrate how this model is used to handle a number of representative non-optimization problems, and I conclude with some general remarks on Dynamic Programming.

    16.1 Review

    Recall that our point of departure in Chapter 1 was the proposition that Dynamic Programming's mode of operation is driven by the following Meta-recipe:
    · Embed your problem in a family of related problems.
    · Derive a relationship between the solutions to these problems.
    · Solve this relationship.
    · Recover a solution to your problem from this relationship.
    In Chapters 3-4 this Meta-recipe was given concrete content when it was put to work in the context of a multistage decision model (N, S, D, T, S_1, g), thus bringing to light the relationship between these four objects: Problem P, Problem P(s), Problem P(n,s) and Problem P(n,s,x). So, let us remind ourselves of the formal definitions of these problems, and of the key moves leading to the derivation of the functional equation.

    Problem P:
        p := opt_{x ∈ X} g(x),  x ∈ X ⊆ X' := X_1 × ··· × X_M      (16.1)
    where g is a real-valued function on X. Let X* denote the set of optimal solutions to this problem.

    Problem P(s), s ∈ S_1:
        p(s) := opt_{(x_1, ..., x_N)} g(s, x_1, x_2, ..., x_N)      (16.2)
        subject to
        s_1 = s,                                   (16.3)
        x_n ∈ D(n, s_n), 1 ≤ n ≤ N,                (16.4)
        s_{n+1} = T(n, s_n, x_n), 1 ≤ n ≤ N.       (16.5)
    Let X*(s) denote the set of optimal solutions to Problem P(s).
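To make the abstract model (N, S, D, T, S_1, g) concrete, here is a small backward-recursion sketch for an additive three-stage problem. The data, state names and function names below are illustrative assumptions, not the book's: D(n, s) lists the feasible decisions in state s at stage n together with their stage costs, T(n, s, x) is the transition function, and the functional equation is solved by memoized recursion.

    from functools import lru_cache

    N = 3
    D = {  # D[(n, s)] = list of (decision x, stage cost) available in state s at stage n
        (1, "s1"): [("a", 1), ("b", 4)],
        (2, "a"):  [("c", 5), ("d", 2)],
        (2, "b"):  [("c", 1), ("d", 3)],
        (3, "c"):  [("end", 2)],
        (3, "d"):  [("end", 6)],
    }


    def T(n, s, x):
        """Transition function: here the decision simply names the next state."""
        return x


    @lru_cache(maxsize=None)
    def f(n, s):
        """Functional equation: f(n, s) = opt over x in D(n, s) of stage cost + f(n+1, T(n, s, x))."""
        if n > N:
            return 0
        return min(cost + f(n + 1, T(n, s, x)) for x, cost in D[(n, s)])


    print(f(1, "s1"))  # 7, via b then c, even though decision a is locally cheaper at stage 1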
  • Modern Control Engineering
    eBook - PDF

    Modern Control Engineering

    Pergamon Unified Engineering Series

    • Maxwell Noton (Author)
    • 2014 (Publication Date)
    • Pergamon
      (Publisher)
    Chapter 4: Dynamic Programming

    4.1 Historical Background

    Most problems involving a sequence of decision processes can be formulated as mathematical programming problems, i.e. linear or non-linear programming. However, it is often more efficient to solve such problems by Dynamic Programming, as devised by Bellman (65) in the 1950s as a result of studies of multi-stage decision processes. Dynamic Programming hinges on the application of Bellman's so-called Principle of Optimality, which results in a basic recurrence relationship to be applied to the successive transitions of a process. The solution of the problem is derived by calculations which proceed in reverse sequence from the final to the initial state. Dynamic Programming has been applied to problems in numerous fields, e.g. production, purchasing and investment problems, distribution of drugs in the body, and design of chemical plants involving cascaded reactors, but especially to problems of control engineering and control theory. The application of the basic discrete form of Dynamic Programming to control problems is limited in practice by the dimensionality of such problems, although the reader is referred to a survey paper by Larson (66). Nevertheless, Dynamic Programming is important to the study of control theory for the following reasons: (a) theoretical results for discrete-time control systems can be developed, a special case of which is the tractable solution of the linear-quadratic control problem, e.g. digital control of linear multivariable systems; (b) iterative computational procedures for discrete-time systems have been developed from a Dynamic Programming viewpoint; (c) the continuous form of dynamic programming provides a link with the classical calculus of variations and is indeed an alternate approach to the study of optimal control (67).
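As a hedged illustration of point (a) above, the finite-horizon linear-quadratic problem is solved by exactly this kind of backward calculation from the final to the initial state. The sketch below uses the standard discrete-time Riccati recursion with illustrative system matrices; it is not taken from this book.

    import numpy as np

    # Illustrative discrete-time double integrator: x_{k+1} = A x_k + B u_k
    A = np.array([[1.0, 1.0], [0.0, 1.0]])
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)          # state cost weight
    R = np.array([[1.0]])  # control cost weight
    Qf = np.eye(2)         # terminal cost weight
    N = 20                 # horizon


    def lqr_backward(A, B, Q, R, Qf, N):
        """Backward recursion for the cost-to-go matrices P_k and feedback gains K_k."""
        P = Qf
        gains = []
        for _ in range(N):
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # K_k
            P = Q + A.T @ P @ (A - B @ K)                      # P_k
            gains.append(K)
        gains.reverse()  # gains[k] is applied at stage k: u_k = -K_k x_k
        return gains


    gains = lqr_backward(A, B, Q, R, Qf, N)
    print(np.round(gains[0], 3))  # gain for the first stage; nearly stationary for a long horizon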
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.