2016 IEEE Statistical Signal Processing Workshop (SSP)
DOI: 10.1109/ssp.2016.7551811

Efficient KLMS and KRLS algorithms: A random Fourier feature perspective

Abstract: We present a new framework for online least-squares algorithms for nonlinear modeling in reproducing kernel Hilbert spaces (RKHS). Instead of implicitly mapping the data to an RKHS (the kernel trick), we map the data to a finite-dimensional Euclidean space using random features drawn from the kernel's Fourier transform, so that the inner product of the mapped data approximates the kernel function. The resulting "linear" algorithm does not require any form of sparsification, since, in contrast to all existing algorithms, the…
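The construction the abstract describes is the standard random-Fourier-feature approximation (Rahimi and Recht style). Below is a minimal sketch assuming a Gaussian kernel; the function names and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def make_rff_map(dim, n_features, sigma=1.0, rng=None):
    """Random Fourier feature map z(x) whose inner products
    approximate the Gaussian kernel exp(-||x - y||^2 / (2 sigma^2))."""
    rng = np.random.default_rng(rng)
    # Frequencies are sampled from the kernel's Fourier transform
    # (a Gaussian); phases from Uniform[0, 2*pi].
    W = rng.normal(scale=1.0 / sigma, size=(n_features, dim))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return lambda x: np.sqrt(2.0 / n_features) * np.cos(W @ x + b)

# z(x)^T z(y) ~= k(x, y); the error shrinks as n_features grows.
z = make_rff_map(dim=5, n_features=2000, sigma=1.0, rng=0)
x, y = np.random.default_rng(1).normal(size=(2, 5))
print(z(x) @ z(y), np.exp(-np.linalg.norm(x - y) ** 2 / 2.0))
```

Once the data are mapped this way, the filter is linear in a fixed-dimensional space, which is why no sparsification or growing dictionary is needed; the number of features trades approximation quality against per-sample cost.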

Cited by 33 publications (23 citation statements) | References 16 publications
“…This increase makes this model unsuitable for real-time applications and large-scale networks. The growing dimensionality of the dictionary is a well-known issue in single-node kernel methods [40], [41], [49]-[53], where several solutions have been…

Algorithm 1: GKLMS
Input: step size µ
Initialization: α_0 = empty vector
% Learning
for each time instant n do
    Input: …”
Section: B. Graph Kernel LMS Using Coherence Check (mentioning, confidence: 99%)
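The section title of this quote points to a coherence check for curbing dictionary growth. One common remedy of this kind (the coherence criterion from kernel adaptive filtering, sketched here under a Gaussian-kernel assumption; the helper names are hypothetical) admits a new sample into the dictionary only if its largest kernel similarity to the stored atoms stays below a threshold µ0.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2.0 * sigma ** 2))

def admit_to_dictionary(x_new, dictionary, mu0=0.5, sigma=1.0):
    """Coherence check: keep the dictionary sparse by admitting x_new
    only if it is sufficiently novel w.r.t. every stored atom."""
    coherence = max((gaussian_kernel(x_new, d, sigma) for d in dictionary),
                    default=0.0)
    if coherence <= mu0:
        dictionary.append(x_new)   # x_new is novel: dictionary grows
        return True
    return False                   # x_new already well represented
```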
“…In conventional DSP, several approaches to nonlinear system modeling exist in the literature [30]-[38]. In particular, methods based on reproducing kernel Hilbert spaces (RKHS) have gained popularity due to their efficacy and mathematical simplicity [36]-[53]. There is extensive literature on function estimation in RKHS for both single- and multi-node networks; see, e.g., [39]-[57].…”
Section: Introduction (mentioning, confidence: 99%)
“…The problem is a joint optimization of $\{\lambda_k, \theta_k\}_{k=1}^{N_w}$, which is obviously not a convex problem. Meanwhile, for commonly used RFF-based KLMS algorithms [4], [14] in kernel adaptive filtering, samples $\{w_i\}_{i=1}^{N_w}$ drawn from the distribution $p(w)$ and samples $\{d_i\}_{i=1}^{N_w}$ drawn from a uniform distribution may not be optimal for the given learning task, because of the uncertainty of random sampling. A set of optimized $\{\theta_k\}_{k=1}^{N_w}$ must be obtained before the risk function can become a convex problem over $\{\lambda_k\}_{k=1}^{N_w}$.…”
Section: Proposed PRFFKLMS Algorithm (mentioning, confidence: 99%)
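The RFF-based KLMS this quote refers to reduces the kernel filter to a plain LMS recursion on the randomly mapped features. The sketch below illustrates that reduction under the same Gaussian-kernel assumption as the earlier snippet; the name rff_klms and its parameters are hypothetical, and the feature map z is taken as given (e.g., from make_rff_map above).

```python
import numpy as np

def rff_klms(X, d, z, n_features, mu=0.1):
    """LMS in the random-feature space: after mapping by z, the kernel
    filter is a fixed-size linear adaptive filter (no growing dictionary)."""
    theta = np.zeros(n_features)        # weights in the feature space
    errors = np.empty(len(d))
    for n, (x_n, d_n) in enumerate(zip(X, d)):
        z_n = z(x_n)                    # random Fourier features of x_n
        e_n = d_n - theta @ z_n         # a-priori estimation error
        theta += mu * e_n * z_n         # standard LMS update
        errors[n] = e_n
    return theta, errors
```

Because the frequencies behind z are drawn randomly from $p(w)$, the features are fixed before learning starts, which is exactly the potential suboptimality the quoted passage raises.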
“…In particular, it is possible to show that this class of kernels can be easily approximated with very simple stochastic mappings. [37] was the first to apply this idea explicitly to kernel filters, and similar algorithms were independently reintroduced in [38]. Since Chapter 8 is entirely devoted to this idea, we will not go further into it.…”
Section: Expansion Over Random Bases (mentioning, confidence: 99%)