2021
DOI: 10.1109/tsmc.2019.2915663
Logarithmic Hyperbolic Cosine Adaptive Filter and Its Performance Analysis

Cited by 85 publications (13 citation statements)
References 36 publications
“…According to (12), the estimated output is given by f̂(u) = ω_z^T z(u), with ω_z being the weight vector in the feature space. Thus, the proposed Nyström mapping based on k-means sampling can be applied in nonlinear adaptive filtering based on different cost functions, e.g., the correntropy [32], Cauchy kernel loss [33], and hyperbolic cosine loss [34], for online applications in the transformed feature space.…”
Section: Nyström Kernel Mapping
confidence: 99%
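The Nyström construction referenced in this statement can be sketched as follows. This is a minimal illustration, not the cited paper's implementation: the Gaussian kernel, the landmark count, and all function names here are assumptions.

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    """Gaussian kernel matrix k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
    d2 = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

def kmeans_landmarks(X, m, iters=20, seed=0):
    """Pick m landmarks by plain k-means (Lloyd's iterations)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), m, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(m):
            pts = X[labels == j]
            if len(pts):
                C[j] = pts.mean(axis=0)
    return C

def nystrom_map(C, sigma=1.0):
    """Return the map z(u) = Lambda^{-1/2} V^T k_C(u): a finite-dimensional
    feature vector whose inner products approximate the kernel."""
    lam, V = np.linalg.eigh(gaussian_gram(C, C, sigma))
    keep = lam > 1e-10                   # drop numerically null directions
    P = V[:, keep] / np.sqrt(lam[keep])  # m x r projection matrix
    return lambda U: gaussian_gram(np.atleast_2d(U), C, sigma) @ P
```

A linear adaptive filter (plain LMS, or any of the robust costs mentioned above, such as the hyperbolic cosine loss) can then be run directly on the transformed features z(u).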
“…the proposed Nyström kernel mapping and the CG method can also be extended to non-quadratic-error-based cost functions [32]-[34], e.g., correntropy, for combating non-Gaussian noises, including large outliers, efficiently.…”
Section: A NysKCG-KM Algorithm
confidence: 99%
“…Kernel adaptive filtering is a family of online kernel learning algorithms that has been widely studied in many fields such as nonlinear time series prediction [1], [2], [3], nonlinear channel equalization [4], [5] and system identification [6], [7]. The kernel least-mean-square (KLMS) algorithm has universal approximation ability and the well-posedness property, where the basic idea is to transform the input data into a high-dimensional feature space using the reproducing kernel [8], [9], [10], [11], [12].…”
Section: Introduction
confidence: 99%
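The KLMS idea summarized in this passage, run LMS in the kernel-induced feature space, which in practice yields a growing radial-basis expansion, can be sketched as below. The Gaussian kernel, step size, and function name are illustrative assumptions, not the exact algorithm of [8]-[12].

```python
import numpy as np

def klms(U, d, eta=0.2, sigma=0.5):
    """Kernel LMS with a Gaussian kernel: the learned function is
    f_n(u) = sum_i eta * e_i * k(u_i, u), each training input becoming
    a new centre (no sparsification in this sketch)."""
    centres, coeffs, errors = [], [], []

    def predict(u):
        if not centres:
            return 0.0
        C = np.asarray(centres)
        k = np.exp(-np.sum((C - u) ** 2, axis=1) / (2.0 * sigma ** 2))
        return float(np.asarray(coeffs) @ k)

    for u, dn in zip(U, d):
        e = dn - predict(u)            # a priori prediction error
        errors.append(e)
        centres.append(np.asarray(u, dtype=float))
        coeffs.append(eta * e)         # stochastic-gradient coefficient
    return predict, errors
```

The unbounded growth of the centre dictionary is exactly what motivates finite-dimensional approximations such as the Nyström mapping discussed in the other citation statements.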
“…In addition, the logarithmic error loss, including the Cauchy loss, is another effective non-quadratic loss for non-Gaussian noises [16]-[20]. It is worth noting that the logarithmic loss can provide better performance than the C-Loss in specific environments [17], [18].…”
Section: Introduction
confidence: 99%
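For context, the logarithmic hyperbolic cosine cost that gives the cited paper its title, J(e) = (1/λ) log(cosh(λe)), has the bounded derivative tanh(λe), which is what makes the resulting LMS-type update robust to impulsive noise. A minimal sketch follows; the step size, the scale parameter λ, and the function name are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def lhcaf_identify(X, d, mu=0.05, lam=2.0):
    """LMS-type system identification with the logarithmic hyperbolic
    cosine cost J(e) = (1/lam) * log(cosh(lam * e)).  The gradient
    factor tanh(lam * e) is bounded in (-1, 1), so a single large
    (impulsive) error cannot blow up the weight update."""
    w = np.zeros(X.shape[1])
    for x, dn in zip(X, d):
        e = dn - w @ x                        # a priori error
        w = w + mu * np.tanh(lam * e) * x     # bounded-influence update
    return w
```

For small errors tanh(λe) ≈ λe, so the update reduces to standard LMS with an effective step size μλ; for large errors it behaves like sign-LMS, which is the robustness property the logarithmic-loss references above exploit.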