2021
DOI: 10.36001/phmconf.2021.v13i1.3057

On Adversarial Vulnerability of PHM algorithms – An initial Study

Abstract: In almost all PHM applications, achieving the highest possible performance (prediction accuracy and robustness) of PHM models (fault detection, fault diagnosis, and prognostics) has been the top development priority, since model performance directly determines how much business value PHM can deliver. However, recent research in other domains, e.g., computer vision (CV), has shown that machine learning (ML) models, especially deep learning models, are vulnerable to adversarial attacks; that is, small …
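
The adversarial attacks referenced in the abstract typically add a small, carefully chosen perturbation to an input so that a trained model changes its prediction. As a minimal illustration of that general idea (not the paper's own method), the sketch below applies the fast gradient sign method (FGSM) to a hypothetical PyTorch classifier; the model, the epsilon value, and the tensor names are all assumptions made for the example.

import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.01) -> torch.Tensor:
    """Return an adversarial copy of x: a small perturbation taken in the
    direction of the sign of the loss gradient (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # epsilon bounds the perturbation size; even a small epsilon can be
    # enough to flip the prediction of an otherwise accurate model.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Hypothetical usage with an assumed fault-diagnosis classifier and test batch:
# model = MyFaultClassifier()                 # assumed trained PHM model
# x, y = next(iter(test_loader))              # assumed batch of sensor windows + labels
# x_adv = fgsm_perturb(model, x, y, epsilon=0.01)
# print((model(x_adv).argmax(dim=1) != y).float().mean())  # fraction of flipped predictions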

Cited by 0 publications
References 15 publications