2012
DOI: 10.1109/tnnls.2011.2178446
Quantized Kernel Least Mean Square Algorithm

Abstract: In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online …
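The abstract's core idea — merge a "redundant" input into the closest existing center rather than growing the network — can be sketched as follows. This is a minimal illustration based only on the abstract's description, not the paper's exact algorithm; the Gaussian kernel, the step size `eta`, and the quantization size `epsilon` are assumed parameters.

```python
import numpy as np

def gaussian_kernel(x, c, sigma=1.0):
    return np.exp(-np.sum((np.asarray(x) - np.asarray(c)) ** 2) / (2 * sigma ** 2))

class QKLMS:
    """Sketch of the quantized KLMS idea: if a new input lies within the
    quantization size epsilon of an existing center, its error-scaled
    update is merged into that center's coefficient instead of adding
    a new radial basis function unit."""

    def __init__(self, eta=0.5, epsilon=1.0, sigma=1.0):
        self.eta = eta          # learning-rate (step size), assumed parameter
        self.epsilon = epsilon  # quantization size, assumed parameter
        self.sigma = sigma      # kernel bandwidth, assumed parameter
        self.centers = []       # codebook of quantized centers
        self.alphas = []        # coefficients, one per center

    def predict(self, x):
        return sum(a * gaussian_kernel(x, c, self.sigma)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, x, d):
        e = d - self.predict(x)          # prediction error
        x = np.asarray(x, dtype=float)
        if self.centers:
            dists = [np.linalg.norm(x - c) for c in self.centers]
            j = int(np.argmin(dists))
            if dists[j] <= self.epsilon:
                # "redundant" datum: update coefficient of the closest center
                self.alphas[j] += self.eta * e
                return e
        # novel datum: allocate a new center
        self.centers.append(x)
        self.alphas.append(self.eta * e)
        return e
```

With a 1-D input stream on [0, 1] and `epsilon = 0.5`, the network size stays far below the number of training samples, which is the point of the quantization step.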

Cited by 365 publications (23 citation statements)
References 36 publications
“…These methods, known as kernel sparsification methods, can be applied to the KTD algorithm to control the growth of the terms in the function expansion, also known as filter size. Popular examples of kernel sparsification methods are the approximate linear dependence (ALD) [19], Surprise criterion [32], Quantization approach [21], and the kernel distance based method [23]. The main idea of sparsification is to only consider a reduced set of samples, called the dictionary, to represent the function of interest.…”
Section: Online Sparsification
confidence: 99%
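The statement above summarizes sparsification as keeping a reduced dictionary of samples; one of the methods it names, approximate linear dependence (ALD) [19], admits a compact sketch. This is a hedged illustration: a sample is admitted to the dictionary only if its feature-space image cannot be approximated by the current dictionary members within a tolerance `nu` (both `nu` and the Gaussian kernel bandwidth are assumed parameters here).

```python
import numpy as np

def gk(x, y, sigma=1.0):
    """Gaussian kernel between two samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

def ald_dictionary(samples, nu=0.1, sigma=1.0):
    """Sketch of the ALD test: admit a sample only if the residual
    delta = k(x,x) - k^T K^{-1} k exceeds the threshold nu, i.e. the
    sample's feature vector is not (approximately) a linear combination
    of the dictionary's feature vectors."""
    D = []
    for x in samples:
        if not D:
            D.append(x)
            continue
        K = np.array([[gk(a, b, sigma) for b in D] for a in D])  # Gram matrix
        k = np.array([gk(a, x, sigma) for a in D])               # cross-kernels
        delta = gk(x, x, sigma) - k @ np.linalg.solve(K, k)      # ALD residual
        if delta > nu:
            D.append(x)
    return D
```

On clustered data the dictionary collapses to roughly one representative per cluster, which is exactly the "reduced set of samples" the citation statement refers to.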
“…In the context of kernel based TD algorithms, sparsification methods such as approximate linear dependence (ALD) [19] have been applied to GPTD [15] and KLSTD [20]. A Quantization approach proposed in [21] has been used in KTD( λ ) [22]. In a similar flavor, the kernel distance based online sparsification method was proposed for a KTD algorithm in [23].…”
Section: Introduction
confidence: 99%
“…Single-hidden-layer feedforward neural networks have been widely applied in machine learning [17][18][19][20][21][22], and ELM represents one of the recent successful approaches in machine learning. Compared with traditional computational intelligence techniques, ELM exhibits better generalization performance at a much faster learning speed and with fewer human interventions.…”
Section: Introduction
confidence: 99%
“…In such circumstances, either coherence detection or demodulation needs to estimate channel state information (CSI) [1]. In the framework of a Gaussian noise model, some effective channel estimation techniques have been studied [4][5][6][7][8][9][10]. In the assumptions of the non-Gaussian impulsive noise model, however, existing estimation techniques do not perform robustly due to heavy tailed impulsive interference.…”
Section: Introduction
confidence: 99%