Abstract-In this article we develop a systematic approach to enforcing strong feasibility of probabilistically constrained stochastic model predictive control problems for linear discrete-time systems under affine disturbance feedback policies. Two approaches are presented, both of which capitalize on and extend the machinery of invariant sets to a stochastic environment. The first approach employs an invariant set as a terminal constraint, whereas the second one constrains the first predicted state. Consequently, the second approach turns out to be completely independent of the policy in question and, moreover, it produces the largest feasible set amongst all admissible policies. As a result, a trade-off between computational complexity and performance can be found without compromising feasibility properties. Our results are demonstrated by means of two numerical examples.
I. INTRODUCTION

Over the last two decades, the field of constrained model predictive control (MPC) has matured substantially. There is now a solid and very general theoretical foundation for stability and feasibility of nominal as well as robust MPC problems [14,18]. Nevertheless, the connection to another mature field, stochastic optimal control, is still not fully developed, although there has been a considerable research effort in this direction in recent years.

The basic ingredient of any receding horizon policy is finite horizon cost minimization, which is the first direction of recent research. This problem lies at the heart of stochastic optimal control theory and is known to be extremely difficult except for a few special cases (e.g., the linear quadratic problem). Thus, one typically seeks a suboptimal solution in a certain finite-dimensional subset of admissible control policies. A popular choice is affine disturbance feedback [10,16], which is also the framework of this article. Here, however, we are not primarily concerned with cost minimization itself, but rather with closed-loop constraint satisfaction. A more general approach is that of nonlinear disturbance feedback, where the decision variables are the coefficients of a linear combination of nonlinear basis functions of the disturbance [11]. In the presence of unbounded disturbances, these nonlinear functions must be bounded whenever bounded control inputs are required. In this article, however, we deal with bounded disturbances only.

Closest to the nature of receding horizon control is the question of enforcing recursive feasibility of probabilistic constraints, which is also the topic of this article. The problem was extensively studied in a series of papers [3,4,5,13,17], where various types of constraints and disturbance properties were considered, and a number of techniques to tackle these problems were proposed.
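For concreteness, the affine disturbance feedback parameterization referred to above can be sketched as follows; the notation here is illustrative (horizon length $N$, feedforward terms $g_i$, gains $M_{i,j}$) rather than the paper's own:

```latex
% Affine disturbance feedback over a prediction horizon of length N:
% the input at prediction step i is an affine function of past disturbances,
%
%     u_i = g_i + \sum_{j=0}^{i-1} M_{i,j}\, w_j,
%     \qquad i = 0, \dots, N-1.
%
% Decision variables: feedforward terms g_0, ..., g_{N-1} and gains M_{i,j}.
% Causality: u_i depends only on disturbances w_0, ..., w_{i-1} already
% observed, which renders the set of admissible (g, M) convex [10,16].
```

The key property, established in [10], is that optimizing over $(g_i, M_{i,j})$ yields a convex problem while being equivalent in expressiveness to affine state feedback parameterizations.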
The common factor of these papers is the use of a perturbed linear state feedback (or pre-stabilization), which necessarily limits the number of degrees of freedom and, as a consequence, the resulting performance. In this article, in contrast, the use of affine disturbance ...