The quality of the model used in a model predictive controller (MPC) is critical for control-loop performance. Assessing the effect of model–plant mismatch (MPM) is therefore fundamental for MPC performance assessment and monitoring. This paper proposes a method for evaluating model quality based on the investigation of closed-loop data and the nominal output sensitivity function, which facilitates assessing the actual closed-loop performance. The effectiveness of the proposed method is illustrated by a multivariable case study considering linear and nonlinear plants.
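Assessments of this kind start from a modeling-error signal: the model is driven with the recorded closed-loop inputs and its simulated output is subtracted from the measured output. As a minimal illustration (the first-order plant, the model parameters, and the function name are hypothetical, not taken from the paper):

```python
import numpy as np

def simulation_error(u, y, a, b):
    """Simulate a first-order model y_m(k) = a*y_m(k-1) + b*u(k-1)
    on the recorded inputs u and return the residual y - y_m.
    A persistently large residual flags model-plant mismatch (MPM)."""
    y_m = np.zeros_like(y)
    for k in range(1, len(y)):
        y_m[k] = a * y_m[k - 1] + b * u[k - 1]
    return y - y_m

# Hypothetical plant: y(k) = 0.9*y(k-1) + 0.5*u(k-1)
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1]

res_good = simulation_error(u, y, 0.9, 0.5)  # perfect model: residual ~ 0
res_bad = simulation_error(u, y, 0.6, 0.5)   # mismatched pole: large residual
```

In the paper's method this raw error is further related to the nominal output sensitivity function so that its effect on actual closed-loop performance can be judged; the sketch above only shows the residual itself.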
Systems with strong interactions among the variables are frequent in the chemical industry, and model predictive control (MPC) is a standard tool in these scenarios. However, model assessment in this case is more complex than for weakly coupled systems, since the interactions make the system more sensitive to model uncertainties. This means that, if the coupling is high, a small modeling error in a single variable can propagate through the entire system. As a result, all controlled variables (CVs) of the MPC will show degraded performance and the root of the model problem will not be evident. This paper presents a model assessment method for highly coupled systems. It extends the method proposed by Botelho et al. for model–plant mismatch evaluation in MPC applications, based on the diagonal elements of the output sensitivity matrix. One of its advantages is that the method does not require prior knowledge of the system's coupling level. The effectiveness of the proposed method is illustrated by two case studies: a high-purity distillation column and the Shell heavy oil fractionator.
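For a square multivariable loop, the output sensitivity matrix is S(z) = (I + G(z)K(z))^(-1), and its diagonal entries describe how each output reacts to a disturbance on its own channel. A minimal sketch of evaluating those diagonal magnitudes on the unit circle (the 2x2 static gains and proportional controller are hypothetical, chosen only to make the example concrete):

```python
import numpy as np

def output_sensitivity_diag(G, K, freqs):
    """Evaluate S(z) = (I + G(z) K(z))^(-1) at z = e^(j*w) for each
    frequency w and return |S_ii(e^(j*w))| for every output channel."""
    n = G(1.0 + 0j).shape[0]
    mags = np.empty((len(freqs), n))
    for i, w in enumerate(freqs):
        z = np.exp(1j * w)
        S = np.linalg.inv(np.eye(n) + G(z) @ K(z))
        mags[i] = np.abs(np.diag(S))
    return mags

# Hypothetical coupled 2x2 plant with a diagonal proportional controller
G = lambda z: np.array([[2.0, 0.5], [0.5, 1.0]], dtype=complex)
K = lambda z: np.array([[0.5, 0.0], [0.0, 0.5]], dtype=complex)
mags = output_sensitivity_diag(G, K, np.linspace(0.01, np.pi, 50))
```

The paper's contribution lies in how these diagonal elements are used for mismatch diagnosis in coupled loops; the snippet only shows the quantity being worked with.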
The longevity of an MPC application is strongly related to maintaining its performance. This work provides an overview of the methodologies available to fulfill this task, including a discussion of the special requirements that performance assessment methodologies must meet in typical industrial MPC applications. The available methodologies were compared against these requirements; the best approaches were then selected and compared with a new method proposed by the authors. These techniques were applied in two case studies: the Shell benchmark process and the quadruple-tank process. The results show that the control policy (setpoint tracking, soft constraints, targets) followed in the MPC application should be the determining factor when selecting a performance assessment methodology.
Poor model quality in a model predictive controller (MPC) is often an important source of performance degradation. A key issue in MPC model assessment is identifying whether poor performance comes from model–plant mismatch (MPM) or from unmeasured disturbances (UD). This paper proposes a method for distinguishing between these degradation sources; the main idea is to compare the statistical distribution of the estimated nominal outputs with the actual modeling error. The proposed approach is evaluated on three case studies: a simple SISO linear MPC and two multivariable cases in which a linear controller is applied to a linear and a nonlinear plant, respectively. The results show that the proposed method provides a good indicator of the source of model degradation, even when both effects are present but one of them is dominant.
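The distribution-comparison idea can be sketched with a two-sample Kolmogorov–Smirnov statistic: if the modeling error has the same distribution as the disturbance-driven nominal output deviations, unmeasured disturbances are the likely cause; a clearly different distribution points to mismatch. The samples and the decision rule below are illustrative assumptions, not the paper's exact test:

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of the two samples."""
    data = np.concatenate([x, y])
    cdf_x = np.searchsorted(np.sort(x), data, side="right") / len(x)
    cdf_y = np.searchsorted(np.sort(y), data, side="right") / len(y)
    return float(np.max(np.abs(cdf_x - cdf_y)))

rng = np.random.default_rng(1)
nominal = rng.normal(0.0, 1.0, 2000)    # disturbance-driven nominal deviations
resid_ud = rng.normal(0.0, 1.0, 2000)   # UD only: same distribution as nominal
resid_mpm = rng.normal(0.0, 2.0, 2000)  # MPM inflates the modeling error

d_ud = ks_statistic(nominal, resid_ud)    # small gap -> blame disturbances
d_mpm = ks_statistic(nominal, resid_mpm)  # large gap -> blame the model
```

With a significance threshold on the statistic, this yields a simple disturbance-versus-mismatch indicator in the spirit of the comparison the abstract describes.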