This preface explains why I wrote this book and who it is for.
Chapter 1
The challenges of dynamic programming

The optimization of problems over time arises in many settings, ranging from the control of heating systems to managing entire economies. In between are examples including landing aircraft, purchasing new equipment, managing blood inventories, scheduling fleets of vehicles, selling assets, investing money in portfolios, or just playing a game of tic-tac-toe or backgammon. These problems involve making decisions, then observing information, after which we make more decisions, then observe more information, and so on. Known as sequential decision problems, they can be straightforward (if subtle) to formulate, but solving them is another matter.

Dynamic programming has its roots in several fields. Engineering and economics tend to focus on problems with continuous states and decisions (these communities refer to decisions as controls), while the fields of operations research and artificial intelligence work primarily with discrete states and decisions (or actions). Problems that are modeled with continuous states and decisions (and typically in continuous time) are often addressed under the umbrella of "control theory," whereas problems with discrete states and decisions, modeled in discrete time, are studied at length under the umbrella of "Markov decision processes." Both of these subfields set up recursive equations that depend on the use of a state variable to capture history in a compact way. There are many high-dimensional problems, such as those involving the allocation of resources, that are generally studied using the tools of mathematical programming. Most of this work focuses on deterministic problems using tools such as linear, nonlinear, or integer programming, but there is a subfield known as stochastic programming which incorporates uncertainty. Our presentation spans all of these fields.
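To make the idea of a recursive equation concrete, the display below is a minimal sketch of the standard discrete-time Bellman recursion for a finite-horizon problem; the notation (state $S_t$, action $a_t$, contribution function $C$, discount factor $\gamma$) is generic textbook notation introduced here for illustration rather than notation defined in this chapter.

\[
V_t(S_t) = \max_{a_t \in \mathcal{A}} \Bigl( C(S_t, a_t) + \gamma \, \mathbb{E}\bigl[ V_{t+1}(S_{t+1}) \mid S_t, a_t \bigr] \Bigr)
\]

Here the state variable $S_t$ carries the history needed to make the decision at time $t$, which is precisely the compact role described above, while the expectation captures the information that is observed only after the decision is made.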