2019
DOI: 10.1016/j.neucom.2019.01.055
Learning from data streams using kernel least-mean-square with multiple kernel-sizes and adaptive step-size

Abstract: A learning task is sequential if its data samples become available over time; kernel adaptive filters (KAFs) are sequential learning algorithms. There are three main challenges in KAFs: (1) selection of an appropriate Mercer kernel; (2) the lack of an effective method to determine kernel-sizes in an online learning context; (3) how to tune the step-size parameter. This work introduces a framework for online prediction that addresses the latter two of these open challenges. The kernel-sizes, unlike traditional …

Cited by 11 publications (8 citation statements); references 38 publications.
“…In general, trial-and-error or cross-validation experiments are often required, which can be computationally demanding. To reduce the cost, some research has been conducted, including the Maximum Correntropy Criterion [7], a Bayesian averaging rule [8], and a stochastic gradient method [9], to optimize the kernel size during the training process.…”
Section: Kernel-based Learning Algorithms
confidence: 99%
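The stochastic-gradient route for tuning the kernel size can be sketched for KLMS with a single Gaussian kernel: alongside the usual coefficient update, descend the instantaneous squared error with respect to the kernel-size σ, using dσ y = Σᵢ aᵢ k(cᵢ, x) ‖cᵢ − x‖² / σ³. This is a minimal illustration of the idea, not the specific method of [9]; the function name `klms_adaptive_sigma` and the parameters `eta`, `mu`, `sigma0` are ours.

```python
import numpy as np

def klms_adaptive_sigma(X, d, eta=0.5, mu=0.05, sigma0=1.0):
    """KLMS with a stochastic-gradient update of the kernel-size sigma (sketch).

    sigma is adapted by descending the instantaneous cost e_n^2 / 2,
    whose gradient is d/dsigma = -e_n * dy_n/dsigma.
    Returns the a-priori errors and the sigma trajectory.
    """
    centers, coeffs = [], []
    sigma = sigma0
    errors, sigmas = [], []
    for x_n, d_n in zip(X, d):
        sq = np.array([np.sum((c - x_n) ** 2) for c in centers])
        if centers:
            k = np.exp(-sq / (2.0 * sigma ** 2))
            y_n = float(np.dot(coeffs, k))
        else:
            y_n = 0.0
        e_n = d_n - y_n
        if centers:
            # each Gaussian term differentiates to k_i * sq_i / sigma^3
            dy_dsigma = float(np.dot(np.array(coeffs) * k, sq)) / sigma ** 3
            sigma = max(sigma + mu * e_n * dy_dsigma, 1e-3)  # keep sigma positive
        # ordinary KLMS update: new center with coefficient eta * e_n
        centers.append(x_n)
        coeffs.append(eta * e_n)
        errors.append(e_n)
        sigmas.append(sigma)
    return np.array(errors), np.array(sigmas)
```

Note the dictionary grows with every sample; a practical variant would combine this with one of the sparsification schemes compared below.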
“…In this section, the proposed AKSL algorithm is compared with five conventional sparse training algorithms, namely CLS-SVM [6], SSS [10], SLQ [11], CCS [22], and VSS [9]. We briefly introduced the CLS-SVM, SSS, and SLQ algorithms in Section 2.…”
Section: Comparison With Other Training Algorithms
confidence: 99%
“…The existing literature focuses on the multiple-kernel learning method and solves different issues such as kernel size and step size. We follow the same line of thought and take the existing methods [17,19,20] as the foundation of the proposed work, proposing a KAF-based approach for close-price prediction.…”
Section: Introduction
confidence: 99%
“…Two well-known KAFs are the kernel least-mean-square (KLMS) algorithm [28] and the kernel recursive least-squares (KRLS) algorithm [29]. KAF methods have been widely applied to system identification, noise elimination, and time-series prediction [30]–[33], while applications in energy systems remain few and far between. Since KAF is favored for its robustness, low complexity, and high approximation capability, it is worthwhile to introduce KAF into the prognostics and health management of energy systems.…”
Section: Introduction
confidence: 99%
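As a concrete reference point, the KLMS algorithm cited above admits a very short implementation: each new input becomes a kernel center whose coefficient is the step-size times the a-priori error. A minimal sketch assuming a Gaussian kernel, a fixed step-size, and an unbounded dictionary (no sparsification); the names `klms`, `eta`, and `sigma` are illustrative:

```python
import numpy as np

def gaussian_kernel(x, y, sigma):
    # Gaussian (RBF) kernel with kernel-size sigma
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def klms(X, d, eta=0.5, sigma=1.0):
    """Kernel least-mean-square over a data stream.

    Returns the a-priori prediction errors e[n] = d[n] - f_{n-1}(x[n]).
    """
    centers, coeffs, errors = [], [], []
    for x_n, d_n in zip(X, d):
        # a-priori prediction from the current kernel expansion
        y_n = sum(a * gaussian_kernel(c, x_n, sigma)
                  for a, c in zip(coeffs, centers))
        e_n = d_n - y_n
        # KLMS update: store the sample as a center weighted by eta * e_n
        centers.append(x_n)
        coeffs.append(eta * e_n)
        errors.append(e_n)
    return np.array(errors)
```

For time-series prediction, `X` would hold lagged samples of the series and `d` the next value; the linearly growing dictionary is exactly what the sparsification and quantization schemes discussed in the citing works aim to curb.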