2019
DOI: 10.1109/access.2019.2936973
Multikernel Adaptive Filters Under the Minimum Cauchy Kernel Loss Criterion

Abstract: The Cauchy loss has been successfully applied in robust learning algorithms in the presence of large outliers, but it may suffer from performance degradation in complex nonlinear tasks. To address this issue, by transforming the original data into a reproducing kernel Hilbert space (RKHS) with the kernel trick, a novel Cauchy kernel loss is developed in that kernel space. Based on the minimum Cauchy kernel loss criterion, the multikernel minimum Cauchy kernel loss (MKMCKL) algorithm is proposed by mapping…

Cited by 13 publications (4 citation statements)
References 47 publications
“…Nevertheless, it has been shown in recent times that nonconvex loss functions improve the generic applicability and robustness of learning, especially in situations where the data and noise distributions are unknown [ 42 ]. One such loss function is given in Equation (10) as a Cauchy kernel risk-sensitive loss (CKRSL), derived using a Gaussian kernel-adapted operator and following methods similar to those in [ 43 , 44 ]. …”
Section: Methods
confidence: 99%
“…According to (12), the estimated output is given by f̂(u) = (ω_z)^T z(u), with ω_z being the weight vector in the feature space. Thus, the proposed Nyström mapping based on k-means sampling can be applied in nonlinear adaptive filtering based on different cost functions, e.g., the correntropy [32], Cauchy kernel loss [33], and hyperbolic cosine loss [34], for online applications in the transformed feature space.…”
Section: Nyström Kernel Mapping
confidence: 99%
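The Nyström mapping mentioned in the excerpt above can be sketched as a finite-dimensional feature map z(u) = K_mm^{-1/2} k_m(u) built from m landmark points. This is an illustrative sketch, not the cited paper's code: it assumes a Gaussian kernel and that the landmark centers would come from k-means on the input data (the k-means step itself is omitted); all names are ours.

```python
import numpy as np

def gaussian_kernel_matrix(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel between the rows of X and Y.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def nystrom_map(centers, sigma=1.0):
    # Nystrom feature map z(u) = K_mm^{-1/2} k_m(u), where K_mm is the
    # kernel matrix over the m landmark centers (e.g., k-means centroids).
    K_mm = gaussian_kernel_matrix(centers, centers, sigma)
    # Symmetric inverse square root via eigendecomposition, with a small
    # floor on the eigenvalues for numerical safety.
    w, V = np.linalg.eigh(K_mm)
    w = np.maximum(w, 1e-12)
    K_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    def z(u):
        # k_m(u): kernel evaluations between the input u and each center.
        k_m = gaussian_kernel_matrix(u[None, :], centers, sigma).ravel()
        return K_inv_sqrt @ k_m
    return z

centers = np.array([[0.0], [1.0], [2.0]])  # stand-ins for k-means centroids
z = nystrom_map(centers)
Z = np.stack([z(c) for c in centers])
```

By construction, inner products of the mapped landmarks reproduce the kernel matrix exactly (Z Zᵀ = K_mm), so a linear filter trained on z(u) — under correntropy, Cauchy kernel loss, or any other cost — approximates a kernel filter in the original RKHS.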
“…In [26], the quantized minimum kernel risk-sensitive loss (QMKRSL) algorithm was proposed to achieve improved, robust nonlinear filtering performance in the presence of outliers. Motivated by the studies in [27,28] on the Cauchy loss, which has been successfully used in various robust learning applications, the multikernel minimum Cauchy kernel loss (MKMCKL) algorithm was reported in [29], showing improved nonlinear filtering performance over its single-kernel counterpart in the presence of extreme outliers. Recently, the kernel affine projection-like (KAPL) algorithm in RKHS was proposed and investigated for nonlinear channel equalization under non-Gaussian noise [30].…”
Section: Introduction
confidence: 99%