Full metadata
Title
Extensions of the Dynamic Programming Framework
Description
Modern life is full of challenging optimization problems that we unknowingly attempt to solve. A common dilemma, for instance, is choosing a parking spot while trying to minimize both the distance to the destination and the time spent searching for parking; one strategy is to drive as close as possible to the destination but risk a penalty if no parking space can be found. Optimization problems of this class all have an underlying time-varying process that can be altered by a decision, or input, to minimize some cost. Such problems are commonly solved by a class of methods called Dynamic Programming (DP), which breaks a complex optimization problem down into a family of simpler sub-problems. In the 1950s Richard Bellman introduced a class of DP methods that decompose Multi-Stage Optimization Problems (MSOPs) into a nested sequence of "tail problems". Bellman showed that, for any MSOP whose cost function satisfies a condition called additive separability, the solution of the tail problem initialized at time stage k > 0 can be used to solve the tail problem initialized at time stage k-1. Therefore, by recursively solving each tail problem of the MSOP, a solution to the original MSOP can be found. This dissertation extends Bellman's theory to a broader class of MSOPs with non-additively separable costs by introducing a new state-augmentation solution method and generalizing the Bellman equation. The dissertation also considers the continuous-time counterpart of discrete-time MSOPs, called Optimal Control Problems (OCPs). OCPs can be solved via a nonlinear Partial Differential Equation (PDE) called the Hamilton-Jacobi-Bellman (HJB) PDE. Unfortunately, it is rarely possible to obtain an analytical solution to the HJB PDE. This dissertation proposes a method for approximately solving the HJB PDE based on Sum-of-Squares (SOS) programming. The resulting SOS algorithm can be used to synthesize controllers, hence solving the OCP, and to compute outer bounds of reachable sets of dynamical systems. The methodology is then extended to infinite time horizons by proposing SOS algorithms that yield Lyapunov functions capable of approximating regions of attraction and attractor sets of nonlinear dynamical systems arbitrarily well.
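To make the tail-problem recursion concrete, the following is a minimal sketch of the classical Bellman equation for an additively separable cost; the symbols (state x_k, input u_k, stage cost c_k, dynamics f_k, horizon T, and tail value function V_k) are generic notation for illustration and are not drawn from the dissertation itself:

\[
V_T(x_T) = c_T(x_T), \qquad
V_k(x_k) = \min_{u_k}\bigl\{\, c_k(x_k, u_k) + V_{k+1}\bigl(f_k(x_k, u_k)\bigr) \,\bigr\}, \quad k = T-1, \dots, 0.
\]

When the total cost is the sum of the stage costs c_k (additive separability), solving these tail problems backward from stage T recovers a solution of the full MSOP; it is this recursion that the dissertation generalizes to costs that are not additively separable.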
Date Created
2021
Contributors
- Jones, Morgan (Author)
- Peet, Matthew M (Thesis advisor)
- Nedich, Angelia (Committee member)
- Kawski, Matthias (Committee member)
- Mignolet, Marc (Committee member)
- Berman, Spring (Committee member)
- Arizona State University (Publisher)
Topical Subject
Resource Type
Extent
282 pages
Language
eng
Copyright Statement
In Copyright
Primary Member of
Peer-reviewed
No
Open Access
No
Handle
https://hdl.handle.net/2286/R.2.N.168490
Level of coding
minimal
Cataloging Standards
Note
Partial requirement for: Ph.D., Arizona State University, 2021
Field of study: Aerospace Engineering
System Created
- 2022-08-22 04:00:28
System Modified
- 2022-08-22 04:00:52
Additional Formats