2015
DOI: 10.1007/s00034-015-9978-7
Steady-State Analysis of the Deficient Length Incremental LMS Adaptive Networks

Cited by 13 publications (13 citation statements: 0 supporting, 13 mentioning, 0 contrasting).
References 22 publications.
“…, j ∈ N_i, being the saddle point of (20). By following the above arguments in the converse direction, it follows that the saddle point of (20) is the fixed point of the iterations (21)-(24). □…”
Section: General Framework (mentioning)
confidence: 92%
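The quoted step relies on the standard equivalence between saddle points and fixed points of primal-dual gradient iterations. Since equations (20)-(24) are not reproduced in the snippet, the LaTeX sketch below states only the generic form of that equivalence, for a hypothetical convex-concave differentiable Lagrangian L(x, λ) and step size μ > 0 standing in for the paper's actual objective and iterations.

```latex
% Generic saddle-point / fixed-point equivalence (sketch only; the
% concrete Lagrangian and the iterations (21)-(24) are in the cited paper).
\begin{align*}
(x^\star,\lambda^\star)\ \text{is a saddle point of } L
  &\iff L(x^\star,\lambda)\le L(x^\star,\lambda^\star)\le L(x,\lambda^\star)
        \quad\forall x,\lambda\\
  &\iff \nabla_x L(x^\star,\lambda^\star)=0
        \ \text{and}\ \nabla_\lambda L(x^\star,\lambda^\star)=0\\
  &\iff x^\star=x^\star-\mu\,\nabla_x L(x^\star,\lambda^\star)
        \ \text{and}\
        \lambda^\star=\lambda^\star+\mu\,\nabla_\lambda L(x^\star,\lambda^\star),
\end{align*}
% i.e., (x*, lambda*) is a fixed point of the gradient
% descent-ascent update for every step size mu > 0.
```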
“…[34]): Let F: ℂ^N → (−∞, ∞] be a convex function. Then x = P_G(z) if and only if z ∈ x + ∂F(x), where ∂F is the subdifferential of F. Proof of Theorem 1: By applying Lemma 1 to (21)-(24) we have…”
Section: General Framework (mentioning)
confidence: 97%
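The lemma quoted above is the standard variational characterization of a proximity operator. A minimal Python sketch, using F(x) = λ‖x‖₁ (an illustrative choice, not the functional of the cited paper), whose proximity operator is soft thresholding, verifies numerically that z − P(z) lands in ∂F(P(z)).

```python
import numpy as np

# Sketch of the quoted lemma x = P_G(z) <=> z in x + dF(x), illustrated
# with F(x) = lam*||x||_1; its proximity operator is soft thresholding.
# (F, lam, and the check below are illustrative assumptions.)

def prox_l1(z, lam):
    """Proximity operator of lam*||.||_1 (soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def in_subdifferential(g, x, lam, tol=1e-12):
    """g in dF(x) for F = lam*||.||_1: g_i = lam*sign(x_i) where
    x_i != 0, and |g_i| <= lam where x_i == 0."""
    nz = x != 0
    return (np.all(np.abs(g[nz] - lam * np.sign(x[nz])) <= tol)
            and np.all(np.abs(g[~nz]) <= lam + tol))

rng = np.random.default_rng(0)
z, lam = rng.normal(size=8), 0.5
x = prox_l1(z, lam)
print(in_subdifferential(z - x, x, lam))  # True, as the lemma predicts
```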
“…Consequently, the previous theoretical results on the sufficient-length NSAF algorithm do not necessarily apply to the deficient-length situation. For such scenarios, the performance of many algorithms has been studied in the literature, such as the LMS [19], [20], the frequency-domain block LMS (FBLMS) [21], and the distributed LMS [22], [23]. To the best of our knowledge, however, there are no available studies that accurately evaluate the performance of the deficient-length NSAF algorithm.…”
Section: Introduction (mentioning)
confidence: 99%
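To make the deficient-length setting concrete, here is a minimal Python sketch of an M-tap LMS filter identifying an FIR system of length L > M: the unmodeled tail w_o[M:] sets an error floor that sufficient-length analyses do not predict. Lengths, step size, and noise level are illustrative assumptions, not the configuration analyzed in any of the cited papers.

```python
import numpy as np

# Deficient-length LMS sketch: the filter has M taps, the unknown
# system has L > M, so the tail w_o[M:] acts as extra "noise" that
# dominates the steady-state error. All parameters are illustrative.

rng = np.random.default_rng(1)
L, M, mu, N, sigma_v = 16, 8, 0.01, 20000, 0.01
w_o = rng.normal(size=L) / np.sqrt(L)    # unknown FIR system
w = np.zeros(M)                          # deficient-length filter
u = rng.normal(size=N + L)               # white Gaussian input
mse = np.empty(N)
for n in range(N):
    x_full = u[n:n + L][::-1]            # full regressor, length L
    d = w_o @ x_full + sigma_v * rng.normal()
    x = x_full[:M]                       # filter only sees M taps
    e = d - w @ x
    w += mu * e * x                      # LMS update
    mse[n] = e ** 2
# Error floor is roughly noise power + energy of the unmodeled tail.
print(np.mean(mse[-2000:]), sigma_v**2 + np.sum(w_o[M:] ** 2))
```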
“…The performance of the incremental LMS algorithm in this realistic case is studied [33]. The incremental combination of RLS and LMS adaptive filters [34] is explored as a design solution to enhance the overall performance of an adaptive system. Comparing the incremental LMS and steepest-descent algorithms for distributed estimation with a diminishing step size reveals an interesting fact: the incremental LMS outperforms the steepest descent in the initial stage of the algorithm (before convergence), while the situation is reversed in the steady state (after convergence).…”
Section: Introduction (mentioning)
confidence: 99%
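For reference, a minimal Python sketch of the incremental LMS iteration discussed above, in the spirit of distributed incremental LMS: the estimate circulates around a ring of nodes, and each node refines it with its local data before passing it on. Network size, step size, and the signal model are illustrative assumptions, not the setup of the cited works.

```python
import numpy as np

# Incremental LMS over a ring of K nodes (sketch): at each time n the
# estimate psi visits every node once; node k applies a local LMS
# update with its own data. Parameters below are illustrative.

rng = np.random.default_rng(2)
K, M, mu, N = 5, 4, 0.02, 5000          # nodes, taps, step, cycles
w_o = rng.normal(size=M)                # common unknown parameter
w = np.zeros(M)                         # network estimate
for n in range(N):
    psi = w                             # psi starts from last estimate
    for k in range(K):                  # one pass around the ring
        u_k = rng.normal(size=M)        # node k's regressor
        d_k = u_k @ w_o + 0.05 * rng.normal()
        psi = psi + mu * (d_k - u_k @ psi) * u_k
    w = psi                             # estimate after the full cycle
print(np.linalg.norm(w - w_o))          # small steady-state deviation
```

Replacing the fixed mu with a diminishing step size mu_n in the same loop would reproduce the early-stage versus steady-state comparison against steepest descent that the quoted passage describes.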