2020
DOI: 10.1109/access.2020.2976595
A Bayesian Optimization AdaBN-DCNN Method With Self-Optimized Structure and Hyperparameters for Domain Adaptation Remaining Useful Life Prediction

Abstract: The prediction of remaining useful life (RUL) of mechanical equipment provides a timely understanding of the equipment degradation and is critical for predictive maintenance of the equipment. In recent years, the applications of deep learning (DL) methods to predict equipment RUL have attracted much attention. There are two major challenges when applying the DL methods for RUL prediction: (1) It is difficult to select the prediction model structure and hyperparameters such as network depth, learning rate, batc…

Cited by 35 publications (18 citation statements). References 37 publications.
“…In particular, four deep learning-based models are selected for performance benchmarking: a multi-objective deep belief networks ensemble (MODBNE) model that uses a multi-objective evolutionary algorithm (MOEA) to train many deep belief networks simultaneously 46 ; a deep convolutional neural network (DCNN) that stacks many CNN layers to extract more complex features 49 ; a deep LSTM (DLSTM) model that stacks many LSTM layers to build a deep model 50 ; and a domain-adaptive CNN (AdaBN-CNN) that uses batch normalization to adapt the prediction domain, allowing the network to train on one dataset and test on another through two different CNNs. 51 Table 8 summarizes the results retrieved from the four above-mentioned models as well as the results from the proposed probabilistic Bayesian RNN: probabilistic Bayesian LSTM for FD001, probabilistic Bayesian VRNN for FD002, probabilistic Bayesian GRU for FD003, and probabilistic Bayesian VRNN for FD004. The best result with the lowest RMSE value for each dataset is highlighted in bold.…”
Section: Results (confidence: 99%)
“…Specifically, a network is first trained using the labeled source data and then transferred to the target domain by adjusting the BN statistics. There are applications of AdaBN in PHM, e.g., on condition diagnosis of bearings and gearboxes [50], [268] and on RUL prognosis of aircraft engines [295].…”
Section: Other Approaches (confidence: 99%)
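The AdaBN recipe quoted above (train on labeled source data, then transfer to the target domain by refreshing only the batch-normalization statistics) can be illustrated with a minimal, self-contained sketch. The Gaussian toy domains, the `bn_apply` helper, and all parameter values below are illustrative assumptions, not the cited authors' code:

```python
import math
import random

def bn_apply(xs, mean, var, gamma, beta, eps=1e-5):
    """Batch-norm transform using externally supplied domain statistics."""
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in xs]

def stats(xs):
    """Population mean and variance of a sample."""
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

rng = random.Random(0)
gamma, beta = 1.5, 0.2  # learned affine BN parameters, kept fixed across domains

# Source-domain activations seen during training.
source = [rng.gauss(0.0, 1.0) for _ in range(5000)]
src_mean, src_var = stats(source)

# Target domain is shifted and rescaled -- the domain gap.
target = [rng.gauss(3.0, 2.0) for _ in range(5000)]

# AdaBN step: recompute BN statistics on *unlabeled* target data only;
# every learned weight (here gamma, beta) stays as trained on the source.
tgt_mean, tgt_var = stats(target)

# Reusing source statistics mis-normalizes the shifted target activations...
misaligned = bn_apply(target, src_mean, src_var, gamma, beta)
# ...while AdaBN re-centers them for the downstream layers.
adapted = bn_apply(target, tgt_mean, tgt_var, gamma, beta)
print(round(stats(adapted)[0], 3), round(stats(adapted)[1], 3))  # → 0.2 2.25
```

After the swap, the adapted activations have mean `beta` and variance ≈ `gamma²` regardless of the target-domain shift, which is why the quoted survey treats AdaBN as a cheap, label-free domain-adaptation step for diagnosis and RUL prognosis.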
“…12, only a few studies have been devoted to a deeper understanding of the above-mentioned TLRM scenario, and its related issues, via TL. For instance, with the aim of identifying the health conditions of…” [A flattened table mapping the TIM scenario to its citing references ([146], [148], [112], …) spilled into the extracted text here; the original table layout is not recoverable.]
Section: Application Categorization (confidence: 99%)