Academic Press, 1970. — 255 p.
This book is designed to serve as an advanced-level textbook for the study of the theory and applications of optimal control. Although a fairly complete treatment of the theory of optimal control is presented, the main emphasis is on the development of numerical algorithms for computing solutions to practical problems. The book should therefore be of interest to the practicing systems engineer as well as to the student of systems theory.
The subject matter is developed in order of increasing complexity. First, the fundamental concepts of parameter optimization are introduced: the first and second variations and the related numerical algorithms, the gradient and Newton-Raphson methods. Next, the optimization of multistage systems is considered. Formally, the multistage problem can be treated as a parameter optimization problem; it is much more convenient, however, to introduce the theory of dynamic programming, which serves as the theoretical basis throughout the remainder of the book. Finally, continuous optimal control problems are treated. Special chapters are devoted to problems with discontinuities and to the solution of two-point boundary value problems.
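
To make the contrast between the two parameter-optimization algorithms named above concrete, the following is a minimal Python sketch, not taken from the book: a gradient step uses only the first variation (the gradient), while a Newton-Raphson step also uses the second variation (the Hessian). The test function, step size, and iteration counts are illustrative assumptions.

    import numpy as np

    # Hypothetical test problem (not from the book): minimize the convex
    # quadratic f(x) = (x0 - 3)^2 + 10*(x1 + 1)^2, whose minimizer is (3, -1).

    def grad(x):
        # First variation: the gradient of f.
        return np.array([2.0 * (x[0] - 3.0), 20.0 * (x[1] + 1.0)])

    def hess(x):
        # Second variation: the (here constant) Hessian of f.
        return np.array([[2.0, 0.0], [0.0, 20.0]])

    def gradient_method(x, alpha=0.05, iters=200):
        # First-order method: step along the negative gradient.
        for _ in range(iters):
            x = x - alpha * grad(x)
        return x

    def newton_raphson(x, iters=10):
        # Second-order method: solve H dx = -g at each iterate.
        for _ in range(iters):
            x = x - np.linalg.solve(hess(x), grad(x))
        return x

    x0 = np.zeros(2)
    print("gradient:", gradient_method(x0))  # approaches (3, -1)
    print("newton:  ", newton_raphson(x0))   # exact in one step on a quadratic

On a quadratic the Newton-Raphson step is exact, which is why second-variation methods converge in far fewer iterations near a minimum, at the cost of forming and solving with the Hessian.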
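
The role of dynamic programming in multistage optimization can likewise be illustrated by a small sketch under assumed data, not the book's own examples: a scalar linear-quadratic multistage problem, where the optimal cost-to-go is propagated backward through the stages (a Riccati recursion) and the resulting feedback law is then applied forward in time.

    # Hypothetical scalar multistage problem (illustrative only): minimize
    # sum_k (q*x_k^2 + r*u_k^2) + q*x_N^2 subject to x_{k+1} = a*x_k + b*u_k.
    # Dynamic programming gives the cost-to-go V_k(x) = p_k * x^2 via a
    # backward recursion, and the optimal control u_k = -K_k * x_k.

    a, b, q, r, N = 1.2, 0.5, 1.0, 0.1, 20

    # Backward pass: propagate the cost-to-go coefficient from the final stage.
    p = q
    gains = []
    for _ in range(N):
        K = (b * p * a) / (r + b * p * b)  # optimal feedback gain at this stage
        p = q + a * p * a - a * p * b * K  # updated cost-to-go coefficient
        gains.append(K)
    gains.reverse()                        # gains[k] now applies at stage k

    # Forward pass: simulate the closed-loop trajectory from x_0 = 1.
    x = 1.0
    for k in range(N):
        u = -gains[k] * x
        x = a * x + b * u
    print("final state:", x)               # driven close to the origin

The backward-then-forward structure of this sketch is exactly what makes dynamic programming a convenient theoretical basis for multistage problems: the optimal control at each stage depends only on the current state and the cost-to-go, not on the entire parameter vector at once.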