2021
DOI: 10.1016/j.jmsy.2020.11.005

Utilizing uncertainty information in remaining useful life estimation via Bayesian neural networks and Hamiltonian Monte Carlo

Cited by 53 publications (18 citation statements). References 27 publications.
“…Typically, Bayesian inference methods are adopted for quantifying model prediction uncertainties (Atamuradov et al, 2017; Ramuhalli et al, 2020) because they naturally incorporate information about the target SSC with prior knowledge (e.g., past analysis results, expert opinion) (Coble et al, 2012). Several Bayesian uncertainty quantification approaches have been used in the literature, including BNs (see "Data-Driven Methods" Section), filtering algorithms (such as the Kalman filter and particle filter, see "Statistical-Based Prognostics" Section), relevance vector machines (Saha and Goebel, 2008), Bayesian neural networks (Benker et al, 2020), and their variants. Some non-Bayesian approaches have also been proposed for uncertainty estimation, each of which is well suited to specific algorithms or applications.…”
Section: Uncertainty Quantification and Propagation (mentioning, confidence: 99%)
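As a concrete illustration of the filtering family mentioned in the excerpt, here is a minimal particle-filter sketch for degradation tracking and RUL estimation. The linear-drift degradation model, noise levels, and failure threshold are invented for illustration and are not taken from any of the cited works.

```python
import numpy as np

# Minimal particle filter for degradation tracking and RUL estimation.
# Model parameters below are illustrative assumptions only.
rng = np.random.default_rng(0)

N = 1000
particles = rng.normal(0.0, 0.1, N)           # belief over the hidden health state
weights = np.full(N, 1.0 / N)

DRIFT, PROC_STD, MEAS_STD = 0.05, 0.02, 0.10  # hypothetical degradation model
FAIL_THRESHOLD = 1.0                          # health state at which failure occurs

def pf_step(particles, weights, y):
    """One predict-update-resample cycle given a new health measurement y."""
    # Predict: propagate each particle through the degradation model.
    particles = particles + DRIFT + rng.normal(0.0, PROC_STD, particles.size)
    # Update: reweight particles by the Gaussian measurement likelihood.
    weights = weights * np.exp(-0.5 * ((y - particles) / MEAS_STD) ** 2)
    weights /= weights.sum()
    # Resample to counteract weight degeneracy.
    idx = rng.choice(particles.size, size=particles.size, p=weights)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

# Feed in a short synthetic measurement sequence from a degrading component.
for y in np.linspace(0.1, 0.6, 10):
    particles, weights = pf_step(particles, weights, y)

# RUL distribution: expected steps until each particle crosses the threshold.
rul = np.maximum(FAIL_THRESHOLD - particles, 0.0) / DRIFT
print(f"RUL: {rul.mean():.1f} +/- {rul.std():.1f} steps")
```

Because the posterior is carried as a particle cloud rather than a point estimate, the RUL comes out as a distribution, which is the uncertainty information the excerpt is concerned with.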
“…For instance, ensemble approaches were applied for UQ in prognostics in [42], where, rather than simply training independent models, Bayesian model averaging was also applied to each model in order to obtain multiple predictions for the elements of the ensemble. UQ based on Bayesian neural networks and variational inference was only recently investigated in [43], [44], with relatively good results in terms of UQ. The goal of this work is to introduce a new class of methods, DGP models, that tries to integrate the benefits of DNNs into the well-understood Bayesian framework of GP regression.…”
Section: UQ in Data-Driven Prognostics (mentioning, confidence: 99%)
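For contrast with the Bayesian model-averaging ensemble of [42], the following is a minimal sketch of the generic ensemble idea: train several independently initialized networks and read the spread of their predictions as epistemic uncertainty. The synthetic data, feature dimension, and network size are arbitrary assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for (condition features, RUL label) pairs.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (200, 4))
y = 100.0 * (1.0 - X[:, 0]) + rng.normal(0.0, 5.0, 200)

# Train several independently initialized networks on the same data.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                 random_state=seed).fit(X, y)
    for seed in range(5)
]

# The spread of the members' predictions approximates epistemic uncertainty.
x_new = rng.uniform(0.0, 1.0, (1, 4))
preds = np.array([m.predict(x_new)[0] for m in ensemble])
print(f"RUL: {preds.mean():.1f}, epistemic std: {preds.std():.1f}")
```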
“…Chinomona et al, for example, applied long short-term memory neural networks to the problem of battery RUL estimation [16], Sun et al applied auto-encoder neural networks to predict the RUL of cutting tools [17], and Yang et al applied convolutional neural networks to the task of bearing RUL prediction [18]. Recent approaches tackle the issue of uncertainty quantification and utilization in DL applications to RUL prediction by applying Bayesian neural networks [9]. Although all the mentioned works demonstrated high accuracy in RUL prediction, they all have in common that they need large, representative training data sets, which are often not available in real industrial applications [7].…”
Section: Related Work (mentioning, confidence: 99%)
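The Bayesian-neural-network approach of [9], the indexed paper, uses Hamiltonian Monte Carlo; as a lightweight stand-in, the sketch below uses Monte Carlo dropout, a common cheap approximation, to obtain a predictive RUL distribution. The architecture and feature dimension are hypothetical, and training is elided.

```python
import torch
import torch.nn as nn

# A small RUL regressor with dropout; architecture and input size are
# hypothetical. Training on (features, RUL) pairs is elided here.
model = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

model.train()                # keep dropout active at prediction time
x = torch.rand(1, 4)         # hypothetical sensor-feature vector
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])

# Mean and spread of the stochastic forward passes give a point RUL
# estimate together with an approximate epistemic uncertainty.
print(f"RUL: {samples.mean().item():.2f} +/- {samples.std().item():.2f}")
```

Unlike the HMC posterior sampling in [9], MC dropout only approximates the weight posterior, but the usage pattern (many stochastic forward passes summarized into a predictive distribution) is the same.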
“…After training, the unobserved sequences from the test data set served as input for the trained GPC model, which predicted the probability of class membership for each single observation within the test sequence up to the latest measurement, resulting in a sequence of HI values over time; this is in accordance with [10] and [9]. In order to account for statistical fluctuations in the results, the experiments were conducted ten times, each time selecting two different random engines for training the GPC model.…”
(mentioning, confidence: 93%)
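A minimal sketch of the health-indicator construction this excerpt describes: a Gaussian process classifier trained on healthy versus degraded examples, whose predicted class probability over a test trajectory yields an HI sequence. The two-dimensional features and class separation are synthetic assumptions, not the data of the cited work.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Synthetic training data: class 0 = healthy (early life), class 1 =
# degraded (near end of life). Features are invented for illustration.
rng = np.random.default_rng(2)
X_train = np.vstack([rng.normal(0.0, 0.3, (50, 2)),   # healthy examples
                     rng.normal(1.5, 0.3, (50, 2))])  # degraded examples
y_train = np.array([0] * 50 + [1] * 50)

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
gpc.fit(X_train, y_train)

# An unobserved test trajectory drifting from healthy toward degraded:
# the predicted probability of the "degraded" class is the HI curve.
trajectory = np.linspace([0.0, 0.0], [1.5, 1.5], 20)
hi = gpc.predict_proba(trajectory)[:, 1]
print(np.round(hi, 2))
```

Because the GPC outputs calibrated probabilities rather than hard labels, the resulting HI sequence rises smoothly toward 1 as the component degrades, which is what makes it usable as a degradation trend for downstream RUL estimation.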