Most literature on the application of Non-Destructive Spectral Sensors (NDSS) reports proofs of concept limited to model calculation (calibration) and its application to a so-called independent data set (validation, or test set). However, developing NDSS also requires proving that the performance obtained during this first validation remains valid when conditions change. This generic problem is referred to as robustness in chemometrics. When the measurement conditions change, the measured spectrum is subject to a deviation. The reproducibility of the model, and thus of the sensor, with respect to this deviation defines its robustness. The application of NDSS involves a large number of processes, and thus of deviation sources. Instrument cloning, between laboratory instruments or from a benchtop to an online device, is certainly the most pressing issue for deploying NDSS-based applications. This problem has been studied for many years in chemometrics, under the paradigm of calibration transfer, through geometric corrections of spectra, of spectral spaces, or of calibration models. The same problem has been addressed in the machine learning community under the domain adaptation paradigm. Although these issues have been addressed separately over the last twenty years, they all fall under the same topic, i.e., model maintenance under dataset shift. This paper aims to provide a vocabulary of concepts for formalizing the calibration model maintenance problem, to review recent developments on the subject, and to categorize prior work according to the proposed concepts.