2021
DOI: 10.1080/24725854.2021.1918804

Meta-modeling of heterogeneous data streams: A dual-network approach for online personalized fault prognostics of equipment

Abstract: We first derive the convergence of gradient descent for the prediction network parameter. Denote the loss function of the prediction network with respect to the parameter α as E_m(α); the Hessian of E_m(α) with respect to α being bounded by ρ_1 is equivalent to the gradient being Lipschitz continuous, which has the following properties.
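The properties referenced at the end of the truncated abstract could not be recovered from the page; as a hedged sketch, they are presumably the standard ρ_1-smoothness inequalities implied by a bounded Hessian, written here in LaTeX:

```latex
% Presumed standard properties of E_m(\alpha) when \|\nabla^2 E_m(\alpha)\| \le \rho_1
% (\rho_1-Lipschitz gradient); these are textbook facts, not text recovered from the abstract.
\|\nabla E_m(\alpha_1) - \nabla E_m(\alpha_2)\| \le \rho_1\,\|\alpha_1 - \alpha_2\|,
\qquad
E_m(\alpha_2) \le E_m(\alpha_1) + \nabla E_m(\alpha_1)^{\top}(\alpha_2 - \alpha_1)
    + \tfrac{\rho_1}{2}\,\|\alpha_2 - \alpha_1\|^{2}.
```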

Cited by 5 publications (2 citation statements)
References 19 publications
“…In parallel ensemble methods (e.g., bagging and random forest [RF]; Breiman, 2001), however, each base learner is built independently, and the base learners are generated in parallel. Moreover, base learners might be of the same type and lead to homogeneous ensembles (e.g., RF), or they may be from different types and lead to heterogeneous ensembles, for example, stacking (Khairalla et al., 2018) and meta‐learning (Ma & Fildes, 2021; Yu & Hua, 2022) with heterogeneous models. Furthermore, we may generate a large number of weak learners (e.g., boosting, bagging, and RF) or a few competitive base learners to combine (e.g., stacking, BMA, and meta‐learning; Leamer & Leamer, 1978).…”
Section: Literature Review
confidence: 99%
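To make the distinction drawn in the quoted passage concrete, the following is a minimal sketch of a heterogeneous ensemble built by stacking a few base learners of different types under a meta-learner; the scikit-learn estimators and synthetic data are illustrative assumptions, not taken from the cited works.

```python
# Minimal sketch: heterogeneous ensemble via stacking.
# A few competitive base learners of different model families are combined
# by a meta-learner (final_estimator). Data and estimators are illustrative only.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR

X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=50, random_state=0)),  # tree-based learner
        ("svr", SVR()),                                                  # kernel-based learner
    ],
    final_estimator=Ridge(),  # meta-learner combining the base predictions
)
stack.fit(X, y)
print(stack.predict(X[:3]))  # predictions from the combined ensemble
```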
“…Here, data stream refers to the data arriving online and in a sequential, nearly continuous fashion (van Rijn et al., 2018). The goal of predictive modeling for data streams is to predict the value of new instances in the data stream, given some knowledge about the values of previous instances (van Rijn et al., 2018; Yu & Hua, 2022). Hence, it requires models that learn incrementally over the data stream, one pass at a time.…”
Section: Introduction
confidence: 99%
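The incremental, one-pass learning described in the quoted passage can be sketched as follows; the choice of scikit-learn's SGDRegressor with partial_fit and the synthetic stream are assumptions made for illustration only, not the method of the cited paper.

```python
# Minimal sketch of one-pass (incremental) learning over a data stream:
# each instance is predicted before its label is used for a single update,
# and past instances are never revisited. Model and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])            # hidden data-generating weights
model = SGDRegressor(learning_rate="constant", eta0=0.01)

for t in range(1, 1001):                        # instances arrive sequentially
    x_t = rng.normal(size=(1, 3))               # new instance from the stream
    y_t = x_t @ true_w + rng.normal(0.0, 0.1)   # label, revealed after prediction
    if t > 1:                                   # the model must see one sample before predicting
        y_hat = model.predict(x_t)              # online prediction for the new instance
    model.partial_fit(x_t, y_t)                 # one incremental update, no replay of old data
```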