Deterministic and Stochastic Optimal Control
This book may be regarded as consisting of two parts. In Chapters I-IV we present what we regard as essential topics in an introduction to deterministic optimal control theory. This material has been used by the authors for one-semester graduate-level courses at Brown University and the University of Kentucky. The simplest problem in calculus of variations is taken as the point of departure, in Chapter I. Chapters II, III, and IV deal with necessary conditions for an optimum, existence and regularity theorems for optimal controls, and the method of dynamic programming. The beginning reader may find it useful first to learn the main results, corollaries, and examples. These tend to be found in the earlier parts of each chapter. We have deliberately postponed some difficult technical proofs to later parts of these chapters.

In the second part of the book we give an introduction to stochastic optimal control for Markov diffusion processes. Our treatment follows the dynamic programming method, and depends on the intimate relationship between second-order partial differential equations of parabolic type and stochastic differential equations. This relationship is reviewed in Chapter V, which may be read independently of Chapters I-IV. Chapter VI is based to a considerable extent on the authors' work in stochastic control since 1961. It also includes two other topics important for applications, namely, the solution to the stochastic linear regulator and the separation principle.
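The point of departure mentioned above, the simplest problem in the calculus of variations, has a standard formulation which may help orient the reader (the notation here is ours, not necessarily the book's): minimize a cost functional over curves with fixed endpoints,

\[
\min_{x(\cdot)} \int_{t_0}^{t_1} L\bigl(t, x(t), \dot{x}(t)\bigr)\,dt,
\qquad x(t_0) = x_0, \quad x(t_1) = x_1,
\]

for which the classical first-order necessary condition is the Euler equation,

\[
\frac{d}{dt}\, L_{\dot{x}}\bigl(t, x(t), \dot{x}(t)\bigr) = L_{x}\bigl(t, x(t), \dot{x}(t)\bigr),
\]

whose solutions are the extremals treated in Chapter I.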