Optimal Control and the Hamilton-Jacobi-Bellman PDE

Anand Deo
Phani Raj Lolakapuri
Friday, 3 Nov 2017, 17:15 to 18:15
A-201 (STCS Seminar Room)
The notion of control can be thought of as the selection of a policy that influences the dynamics of a system in order to achieve a desired objective. When the objective is to maximize (or minimize) a known payoff function that depends on the state of the system, the problem is called optimal control. In this talk, we give a precise mathematical formulation of the deterministic optimal control problem. Under a mild set of conditions, we show that the optimal payoff solves a non-linear partial differential equation (PDE), describe a method to solve this PDE, and as a result obtain a characterization of the optimal control policy. To conclude, we present examples of optimal control problems. Such problems have a wide range of applications in engineering, inventory theory, economics, and classical mechanics. Only a basic background in calculus will be assumed for this talk.
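As a point of orientation, the non-linear PDE referred to in the title is the Hamilton-Jacobi-Bellman equation. A standard finite-horizon form is sketched below; the notation (value function V, dynamics f, payoffs r and g) is assumed here and is not taken from the abstract itself.

```latex
% Standard finite-horizon HJB equation (notation assumed, not from the abstract):
% state x(t) with dynamics \dot{x} = f(x, a) under control a,
% running payoff r(x, a), terminal payoff g(x), value function V(x, t).
\[
  \frac{\partial V}{\partial t}(x,t)
  + \max_{a}\Big\{ f(x,a)\cdot \nabla_x V(x,t) + r(x,a) \Big\} = 0,
  \qquad V(x,T) = g(x).
\]
```

The optimal policy is then characterized pointwise as the control a attaining the maximum in the braces.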
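To make the solution method concrete, here is a minimal sketch of finite-horizon dynamic programming on a one-dimensional grid, the discrete-time analogue of solving the HJB equation backward from the terminal condition. The grid size, horizon, action set, and payoff function below are illustrative assumptions, not taken from the talk.

```python
# Minimal sketch: finite-horizon dynamic programming on a 1-D grid,
# the discrete-time analogue of the HJB backward recursion.
# Dynamics, payoff, and horizon are illustrative assumptions.

def solve_dp(n_states=11, horizon=5):
    actions = (-1, 0, 1)  # admissible controls: move left, stay, move right

    def reward(x, a):
        # Hypothetical running payoff: stay near state 5, small cost for moving.
        return -abs(x - 5) - 0.1 * abs(a)

    V = [0.0] * n_states  # terminal payoff g(x) = 0
    policy = [0] * n_states
    for _ in range(horizon):  # backward induction over the horizon
        new_V, new_policy = [], []
        for x in range(n_states):
            best_val, best_a = float("-inf"), 0
            for a in actions:
                nx = min(max(x + a, 0), n_states - 1)  # clip dynamics to grid
                val = reward(x, a) + V[nx]  # Bellman backup
                if val > best_val:
                    best_val, best_a = val, a
            new_V.append(best_val)
            new_policy.append(best_a)
        V, policy = new_V, new_policy
    return V, policy
```

Running this, the recovered policy steers every state toward 5 (move right below it, stay at it, move left above it), mirroring the HJB characterization of the optimal control as the maximizer of the Hamiltonian at each state.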