2014
DOI: 10.1109/tsp.2014.2333559
A Novel Family of Adaptive Filtering Algorithms Based on the Logarithmic Cost

Abstract: We introduce a novel family of adaptive filtering algorithms based on a relative logarithmic cost. The new family intrinsically combines the higher- and lower-order measures of the error into a single continuous update based on the error amount. We introduce important members of this family, such as the least mean logarithmic square (LMLS) and least logarithmic absolute difference (LLAD) algorithms, which improve the convergence performance of the conventional algorithms. However, our approa…
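As a rough illustration of how a relative logarithmic cost combines higher- and lower-order error measures, the sketch below builds costs of the form J(e) = F(e) - (1/a)*ln(1 + a*F(e)) for a squared-error and an absolute-error kernel. The constants and the exact normalization here are illustrative assumptions, not taken verbatim from the paper:

```python
import math

def lmls_cost(e, alpha=1.0):
    # Logarithmic cost built on the squared error F(e) = e^2 / 2:
    #   J(e) = F(e) - (1/alpha) * ln(1 + alpha * F(e))
    # Behaves like alpha*e^4/8 for small |e| (mean-fourth-like)
    # and like e^2/2 for large |e| (mean-square-like).
    f = 0.5 * e * e
    return f - (1.0 / alpha) * math.log(1.0 + alpha * f)

def llad_cost(e, tau=1.0):
    # Same construction with F(e) = |e|:
    #   J(e) = |e| - (1/tau) * ln(1 + tau * |e|)
    # Behaves like tau*e^2/2 for small |e| (mean-square-like)
    # and like |e| for large |e| (sign-algorithm-like).
    a = abs(e)
    return a - (1.0 / tau) * math.log(1.0 + tau * a)

# Small errors are penalized with the higher-order measure,
# large errors only with the lower-order one:
print(lmls_cost(0.1), lmls_cost(10.0))
print(llad_cost(0.1), llad_cost(10.0))
```

The continuous transition between the two regimes is what lets a single update adapt its effective order to the error amount.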

Cited by 152 publications (57 citation statements)
References 33 publications
“…similar to the stable-NLMF algorithm [24,25]. In practice, in order to avoid a division by zero, we also propose the regularized stable-PNLMF algorithm modifying (6) such that…”
Section: Proportionate Update Approach (mentioning, confidence: 99%)
“…Hence, we propose the Krylov-proportionate normalized least mean mixed norm (KPNLMMN) algorithm having a convex combination of the mean-square and the mean-fourth error objectives. In addition, we point out that the stability of the mean-fourth error based algorithms depends on the initial value of the adaptive filter weights, the input and noise power [23][24][25]. In order to enhance the stability of the introduced algorithms, we further introduce the stable-PNLMF and the stable-KPNLMF algorithms [24,25].…”
Section: Introduction (mentioning, confidence: 99%)
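The convex combination of mean-square and mean-fourth objectives mentioned in this citation can be sketched as a single stochastic-gradient step on J = lam*e^2 + (1 - lam)*e^4. This is a generic mixed-norm sketch, not the KPNLMMN algorithm itself; mu and lam are illustrative parameters:

```python
def mixed_norm_update(w, x, d, mu=0.02, lam=0.5):
    # One stochastic-gradient step on J(e) = lam*e^2 + (1 - lam)*e^4,
    # where e = d - w^T x. lam = 1 recovers the LMS update and
    # lam = 0 recovers the least-mean-fourth (LMF) update.
    e = d - sum(wi * xi for wi, xi in zip(w, x))
    grad_scale = 2.0 * lam * e + 4.0 * (1.0 - lam) * e ** 3
    w_new = [wi + mu * grad_scale * xi for wi, xi in zip(w, x)]
    return w_new, e
```

The e^3 term accelerates convergence when the error is large, while the linear term keeps the update well-behaved near convergence; the stability caveat quoted above stems from the unbounded growth of the fourth-order term.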
“…, w_M(n)]^T stands for the estimate of the parameter vector, σ is the kernel width for CIM, and τ is a design parameter [22]. In (4), the LLAD term J_LLAD(n) = |e(n)| − (1/τ) ln(1 + τ|e(n)|) is robust to impulsive noise, of which the detailed analysis can be found in [22], and the CIM term (with smaller width) J_CIM(n) is a sparsity-inducing term, and the two terms are balanced by a weight factor λ ≥ 0. Based on the adaptation cost (4), a gradient-based adaptive algorithm can be easily derived as follows:…”
Section: Sparse LLAD Algorithm (mentioning, confidence: 99%)
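For the LLAD term alone (i.e., dropping the CIM sparsity term, λ = 0), differentiating J_LLAD(e) = |e| − (1/τ) ln(1 + τ|e|) gives dJ/de = τe/(1 + τ|e|), which yields a gradient update of the following shape. This is a sketch of that derivation with illustrative mu and tau, not the exact sparse LLAD recursion of the cited work:

```python
def llad_update(w, x, d, mu=0.05, tau=1.0):
    # Gradient step on J(e) = |e| - (1/tau)*ln(1 + tau*|e|), e = d - w^T x.
    # dJ/de = tau*e / (1 + tau*|e|): for small |e| the step is LMS-like
    # (proportional to e), while for large |e| it saturates toward sign(e),
    # which is what makes the update robust to impulsive noise.
    e = d - sum(wi * xi for wi, xi in zip(w, x))
    scale = tau * e / (1.0 + tau * abs(e))
    w_new = [wi + mu * scale * xi for wi, xi in zip(w, x)]
    return w_new, e
```

An occasional large impulse in d perturbs w by at most mu*|x| per step, regardless of the impulse amplitude, so the filter recovers quickly between impulses.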
“…However, the SA usually exhibits slower convergence performance especially for highly correlated input signals. To address this problem, the logarithmic least absolute deviation (LLAD) algorithm was recently developed in [22], which is robust to impulsive interferences and converges much faster than the original sign algorithm.…”
Section: Introduction (mentioning, confidence: 99%)