In this work, we revisit the Linear Quadratic Gaussian (LQG) optimal control problem from a behavioral perspective. Motivated by the suitability of behavioral models for data-driven control, we begin with a reformulation of the LQG problem in the space of input-output behaviors and obtain a complete characterization of the optimal solutions. In particular, we show that the optimal LQG controller can be expressed as a static behavioral-feedback gain, thereby eliminating the need for the dynamic state estimation characteristic of state-space methods. The static form of the optimal LQG gain also makes it amenable to computation by gradient descent, which we investigate via numerical experiments. Furthermore, we highlight the advantages of this approach in the data-driven control setting of learning the optimal LQG controller from expert demonstrations.