In almost all PHM applications, achieving the highest possible performance (prediction accuracy and robustness) of PHM models (fault detection, fault diagnosis, and prognostics) has been the top development priority, since a model's performance directly determines how much business value it can deliver. However, recent research in other domains, e.g., computer vision (CV), has shown that machine learning (ML) models, especially deep learning models, are vulnerable to adversarial attacks; that is, small, deliberately designed perturbations to the original samples can cause a model to make false predictions with high confidence. In fact, adversarial machine learning (AML), which studies the security of ML algorithms against adversaries, has become an emerging ML topic and has attracted tremendous research attention in CV and natural language processing (NLP).
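To make the notion of a "small, deliberately designed perturbation" concrete, the sketch below illustrates one standard attack from the CV literature, the fast gradient sign method (FGSM), applied to a toy classifier over a window of sensor readings. The model architecture, data shapes, and perturbation budget are hypothetical placeholders chosen for illustration only and are not part of the case study in this paper.

```python
# Minimal FGSM-style sketch (illustrative only; model, shapes, and epsilon are assumed).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical classifier over a window of multivariate sensor readings
# (8 channels x 64 time steps -> healthy / faulty).
model = nn.Sequential(nn.Flatten(), nn.Linear(8 * 64, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 8, 64)   # one synthetic sensor window
y = torch.tensor([0])       # its true label, e.g., "healthy"

# Compute the gradient of the loss with respect to the input.
x_adv = x.clone().requires_grad_(True)
loss = loss_fn(model(x_adv), y)
loss.backward()

# Take a small step in the direction that increases the loss.
epsilon = 0.05              # perturbation budget (illustrative value)
x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

print("clean prediction:     ", model(x).argmax(dim=1).item())
print("perturbed prediction: ", model(x_adv).argmax(dim=1).item())
print("max absolute change:  ", (x_adv - x).abs().max().item())
```

Even though the perturbation is bounded by a small epsilon per element, such gradient-based steps can flip the model's prediction, which is the vulnerability this paper examines in the PHM setting.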
Yet, in the PHM community, little attention has been paid to the adversarial vulnerability or security of PHM models. We contend that the economic impact of an adversarial attack on a PHM model can be even greater than that on hard perceptual problems, and thus securing PHM models against adversarial attacks is as important as building the models themselves. Moreover, because the data used by PHM models are primarily time-series sensor measurements, these models have their own unique characteristics and deserve special attention when securing them.
In this paper, we explore the adversarial vulnerability of PHM models through an initial case study. More specifically, we consider several unique characteristics of streaming time-series sensor measurements when developing strategies for attacking PHM models. We hope this initial study can shed light on, and stimulate more research interest in, the security of PHM models.