2020
DOI: 10.1109/access.2020.2987815

Growing and Pruning Selective Ensemble Regression for Nonlinear and Nonstationary Systems

Abstract: For a selective ensemble regression (SER) scheme to be effective in online modeling of fast-arriving nonlinear and nonstationary data, it must not only be capable of maintaining a most up-to-date and diverse base model set but also be able to forget old knowledge that is no longer relevant. Based on these two important principles, in this paper we propose a novel growing and pruning SER (GAP-SER) for time-varying nonlinear data. Specifically, during online operation, a newly emerging process state is automatically identi…

Cited by 7 publications (19 citation statements)
References 55 publications
“…The significance levels α_µ and α_Σ are usually set to small values, e.g., 0.05 or 0.01, and they can differ according to the process data characteristics. Similar to our previous work [25], [26], the selection of the window size W_G is a trade-off between the adaptive ability to capture local characteristics and the accuracy of the local model. The proposed local learning procedure is summarized in Algorithm 1.…”
Section: A Adaptive Local Learning Via Multivariate Statistics
confidence: 93%
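The statement above describes flagging a new process state with a multivariate statistical test at a small significance level over a data window of size W_G. The excerpts do not show the paper's exact statistic, so the following is only an illustrative sketch assuming a Hotelling-style T² check of the window mean against a reference; the function name `mean_shift_detected` and its signature are hypothetical:

```python
import numpy as np
from scipy.stats import chi2

def mean_shift_detected(window, mu_ref, cov_ref, alpha=0.05):
    """Illustrative sketch: flag a new process state when the mean of the
    current data window drifts from the reference mean mu_ref, using a
    Hotelling-style T^2 statistic with known reference covariance cov_ref."""
    W, d = window.shape
    xbar = window.mean(axis=0)
    diff = xbar - mu_ref
    # T^2 = W * (xbar - mu)^T Sigma^{-1} (xbar - mu)
    t2 = W * diff @ np.linalg.solve(cov_ref, diff)
    # Compare against a chi-square threshold at significance level alpha
    return t2 > chi2.ppf(1.0 - alpha, df=d)
```

A smaller alpha (e.g., 0.01 instead of 0.05) makes the detector more conservative, which matches the excerpt's remark that the significance levels can be tuned to the process data characteristics.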
“…During online operation, when the newest data sample {x(t_next), y(t_next)} is available, the data window shifts one sample ahead, and the corresponding learning procedure can then be carried out. Unlike our previous work for single-output modeling [26], we consider multi-output modeling via multivariate statistics. This local learning procedure automatically encodes a newly emerging process state in the memory as a new local linear model.…”
Section: A Adaptive Local Learning Via Multivariate Statistics
confidence: 99%
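The statement above walks through two mechanics: shifting the data window one sample ahead as each new sample arrives, and fitting a multi-output local linear model over the current window. The excerpts do not give the paper's estimator, so the sketch below assumes a plain least-squares fit with an intercept; the helper names `fit_local_model` and `slide_window` are hypothetical:

```python
import numpy as np

def fit_local_model(X, Y):
    """Least-squares fit of a local linear model Y ~ [1, X] @ Theta over the
    current window. Y has one column per output (multi-output modeling)."""
    A = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend intercept column
    Theta, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return Theta

def slide_window(X_win, Y_win, x_new, y_new):
    """Shift the window one sample ahead: drop the oldest row, append the
    newest sample {x(t_next), y(t_next)}."""
    X_win = np.vstack([X_win[1:], x_new])
    Y_win = np.vstack([Y_win[1:], y_new])
    return X_win, Y_win
```

After each shift, refitting over the new window yields the local linear model that the scheme would store in memory as the encoding of the current process state.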