In this brief, we consider the constrained optimization problem underpinning model predictive control (MPC). We show that this problem can be decomposed into an unconstrained optimization problem with the same cost function as the original and a constrained optimization problem with a modified cost function and dynamics precompensated according to the solution of the unconstrained problem. For linear systems subject to a quadratic cost, the unconstrained problem has the familiar linear quadratic regulator (LQR) solution, and the constrained problem reduces to a minimum-norm projection. This implies that solving a linear MPC problem is equivalent to precompensating the system with LQR and applying MPC that penalizes only the control input. We propose to call this a constraint-separation principle and discuss its utility in the design of MPC schemes for constrained systems and in the development of numerical solvers for MPC problems.
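
To make the quadratic case concrete, the following sketch uses standard finite-horizon LQ notation not fixed in the abstract itself: dynamics $x_{k+1} = A x_k + B u_k$, stage cost $x_k^\top Q x_k + u_k^\top R u_k$, terminal cost $x_N^\top P x_N$ with $P$ the stabilizing solution of the discrete-time algebraic Riccati equation, and LQR gain $K = (R + B^\top P B)^{-1} B^\top P A$. Under these assumptions, the usual completion-of-squares argument gives
\begin{align*}
  J &= x_N^\top P x_N + \sum_{k=0}^{N-1} \bigl( x_k^\top Q x_k + u_k^\top R u_k \bigr) \\
    &= x_0^\top P x_0 + \sum_{k=0}^{N-1} (u_k + K x_k)^\top (R + B^\top P B)\,(u_k + K x_k),
\end{align*}
so with the precompensated input $v_k = u_k + K x_k$, the constrained part of the problem becomes the minimum-norm projection $\min_{v} \sum_{k} v_k^\top (R + B^\top P B)\, v_k$ subject to $x_{k+1} = (A - BK) x_k + B v_k$ and the original constraints, consistent with the decomposition described above.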