Damage diagnosis has become a valuable tool for asset management, enhanced by advances in sensor technologies that allow for system monitoring and provide massive amounts of data for use in health state diagnosis. However, when dealing with massive data, manual feature extraction is not always a suitable approach: it is labor intensive and requires the intervention of domain experts with knowledge of the relevant variables that govern the system and their impact on its degradation process. To address these challenges, convolutional neural networks (CNNs) have recently been proposed to automatically extract the features that best represent a system’s degradation behavior. They are a promising and powerful technique for supervised learning, and recent studies have shown their advantages for feature identification, extraction, and damage quantification in machine health assessment. Here, we propose a novel deep CNN-based approach for structural damage localization and quantification, which operates on images generated from the structure’s transmissibility functions to exploit the CNNs’ image processing capabilities and to automatically extract and select features relevant to the structure’s degradation process. The resulting feature maps are fed into a multilayer perceptron to achieve damage localization and quantification. The approach is validated and exemplified by means of two case studies involving a mass-spring system and a structural beam, where training data are generated from finite element models calibrated on experimental data. For each case study, the models are also validated using experimental data; results indicate that the proposed approach delivers satisfactory performance, making it an appropriate tool for damage diagnosis.
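As an illustrative sketch (not code from the paper), the input construction described above can be pictured as follows: a transmissibility function is the ratio of output to input response spectra for a sensor pair, and stacking the magnitudes from several sensor pairs yields a 2-D array that can be treated as an image for a CNN. All names, the FFT length, and the random test signals below are assumptions for demonstration.

```python
import numpy as np

def transmissibility(x_in, x_out, n_fft=256):
    """Transmissibility magnitude: ratio of output to input spectra (hypothetical sketch)."""
    X_in = np.fft.rfft(x_in, n=n_fft)
    X_out = np.fft.rfft(x_out, n=n_fft)
    return np.abs(X_out) / (np.abs(X_in) + 1e-12)  # small epsilon avoids division by zero

def transmissibility_image(signals_in, signals_out, n_fft=256):
    """Stack transmissibility magnitudes from several sensor pairs into a 2-D 'image'."""
    rows = [transmissibility(a, b, n_fft) for a, b in zip(signals_in, signals_out)]
    return np.stack(rows)  # shape: (n_sensor_pairs, n_fft // 2 + 1)

# Synthetic stand-in for measured responses: 8 sensor pairs, 1024 samples each
rng = np.random.default_rng(0)
sig_in = rng.standard_normal((8, 1024))
sig_out = 0.8 * sig_in + 0.1 * rng.standard_normal((8, 1024))
img = transmissibility_image(sig_in, sig_out)
print(img.shape)  # (8, 129)
```

In the paper's pipeline, such images would be passed through convolutional layers, and the resulting feature maps flattened into a multilayer perceptron for localization and quantification.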
Driven by the development of machine learning (ML) and deep learning techniques, prognostics and health management (PHM) has become a key aspect of reliability engineering research. With the recent rise in popularity of quantum computing algorithms and the public availability of first-generation quantum hardware, it is of interest to assess their potential for efficiently handling large quantities of operational data for PHM purposes. This paper addresses the application of quantum kernel classification models for fault detection in wind turbine systems (WTSs). The analyzed data correspond to low-frequency SCADA sensor measurements and recorded SCADA alarm logs, focused on the early detection of pitch faults. This work aims to explore potential advantages of quantum kernel methods, such as quantum support vector machines (Q-SVMs), over traditional ML approaches and to compare principal component analysis (PCA) and autoencoders (AE) as feature reduction tools. Results show that the proposed quantum approach is comparable to conventional ML models in terms of performance and can outperform traditional models (random forest, k-nearest neighbors) for the selected reduced dimensionality of 19 features for both PCA and AE. The overall highest mean accuracies obtained are 0.945 for the Gaussian SVM and 0.925 for the Q-SVM models.
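As a hedged sketch of the two preprocessing ingredients this abstract names (not the authors' implementation), the classical side can be pictured as PCA reduction of SCADA features to 19 components followed by construction of a kernel matrix for an SVM with a precomputed kernel; a Q-SVM would replace the Gaussian kernel entry K(x, y) with a quantum state fidelity evaluated on hardware or a simulator. The data shapes, `gamma`, and all function names below are illustrative assumptions.

```python
import numpy as np

def pca_reduce(X, n_components=19):
    """Reduce features via PCA, implemented with SVD on the centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T  # project onto leading principal directions

def gaussian_kernel(A, B, gamma=0.1):
    """Classical Gaussian (RBF) kernel matrix; a Q-SVM swaps this for a quantum fidelity kernel."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

# Hypothetical stand-in for SCADA data: 100 samples, 50 raw features
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 50))
Z = pca_reduce(X)          # 19-dimensional representation, as in the study
K = gaussian_kernel(Z, Z)  # kernel matrix an SVM with kernel='precomputed' could consume
print(Z.shape, K.shape)    # (100, 19) (100, 100)
```

The design point being illustrated is that only the kernel changes between the Gaussian SVM and the Q-SVM; the downstream SVM optimization is identical in both cases.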
Sensor monitoring networks and advances in big data analytics have guided the reliability engineering landscape to a new era of big machinery data. Low-cost sensors, along with the evolution of the internet of things and industry 4.0, have resulted in rich databases that can be analyzed through prognostics and health management (PHM) frameworks. Several data-driven models (DDMs) have been proposed and applied for diagnostic and prognostic purposes in complex systems. However, many of these models are developed using simulated or experimental data sets, and there is still a knowledge gap for applications in real operating systems. Furthermore, little attention has been given to the required data preprocessing steps compared to the training processes of these DDMs. To date, research works have not followed a formal and consistent data preprocessing guideline for PHM applications. This paper presents a comprehensive step-by-step pipeline for the preprocessing of monitoring data from complex systems, aimed at DDMs. The importance of expert knowledge is discussed in the context of data selection and label generation. Two case studies are presented for validation, with the end goal of creating clean data sets with healthy and unhealthy labels that are then used to train machinery health state classifiers.
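A minimal sketch of the kind of pipeline this abstract describes, under stated assumptions: data selection drops invalid sensor rows, features are standardized, and healthy/unhealthy labels are generated from an expert-defined health indicator. The threshold rule, the choice of last column as indicator, and all names below are hypothetical, not taken from the paper.

```python
import numpy as np

def preprocess(raw, healthy_threshold):
    """Illustrative preprocessing sketch: select, standardize, and label monitoring data."""
    # 1) Data selection: drop rows containing NaNs (e.g. sensor dropouts)
    clean = raw[~np.isnan(raw).any(axis=1)]
    # 2) Normalization: z-score each feature column (guard against zero variance)
    mu, sigma = clean.mean(axis=0), clean.std(axis=0)
    scaled = (clean - mu) / np.where(sigma == 0, 1.0, sigma)
    # 3) Label generation: expert-defined health indicator, here assumed to be
    #    the last column exceeding a threshold (1 = unhealthy, 0 = healthy)
    labels = (clean[:, -1] > healthy_threshold).astype(int)
    return scaled, labels

# Toy monitoring matrix: one row has a missing measurement and is dropped
raw = np.array([[1.0, 0.2],
                [2.0, np.nan],
                [3.0, 0.9]])
scaled, labels = preprocess(raw, healthy_threshold=0.5)
print(scaled.shape, labels.tolist())  # (2, 2) [0, 1]
```

The resulting labeled, standardized data set is what would then be handed to a machinery health state classifier, as in the paper's two case studies.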