2017
DOI: 10.1049/iet-spr.2015.0218

l0-norm penalised shrinkage linear and widely linear LMS algorithms for sparse system identification

Abstract: In this study, the authors propose an l0-norm penalised shrinkage linear least mean squares (l0-SH-LMS) algorithm and an l0-norm penalised shrinkage widely linear least mean squares (l0-SH-WL-LMS) algorithm for sparse system identification. The proposed algorithms exploit the a priori and a posteriori errors to calculate the varying step-size, so they can adapt to time-varying channels. Meanwhile, they introduce into the cost function a penalty term that favours sparsity, to enable applicability to …
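The abstract combines two ingredients: an error-driven variable step-size and an l0-norm penalty that attracts small taps toward zero. As a rough illustration only, here is a minimal sketch of an l0-penalised LMS loop in Python. The exponential l0 surrogate, the normalised step-size heuristic, and all function and parameter names are assumptions for illustration, not the authors' exact shrinkage rule.

```python
import numpy as np

def l0_penalised_lms(x, d, n_taps, mu_max=0.5, kappa=5e-4, beta=10.0):
    """Rough sketch: LMS with an l0-norm zero-attraction term.

    The penalty uses the common exponential surrogate
    sum_i (1 - exp(-beta*|w_i|)) with its first-order approximation;
    the variable step-size below is a simple normalised heuristic,
    NOT the paper's a priori / a posteriori shrinkage rule."""
    w = np.zeros(n_taps)                        # adaptive tap weights
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]       # tap-input (regressor) vector
        e = d[n] - w @ u                        # a priori error
        mu = mu_max / (1e-8 + u @ u)            # normalised step-size (stand-in)
        # Zero-attraction term: gradient of the l0 surrogate,
        # first-order approximated, active only where |w_i| <= 1/beta.
        g = np.where(np.abs(w) <= 1.0 / beta,
                     beta * np.sign(w) - beta**2 * w, 0.0)
        w = w + mu * e * u - kappa * g          # LMS update + sparsity attraction
    return w
```

For a sparse unknown channel, the zero-attraction term drives the many inactive taps toward exactly zero, which is where the steady-state misalignment gain of such algorithms over plain LMS comes from.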

Cited by 13 publications (9 citation statements)
References 36 publications
“…As a consequence, the transient MSD behavior of the sparsity-aware NSAF algorithms can be described by (28) with the recursions (18) and (26). However, it still requires the computation of the moments (18) and (26) in advance.…”
Section: Assumption (mentioning)
confidence: 99%
“…Since the matrices H(k) and Θ(k) are also bounded (see [19], [24] for details), we can use Theorem 2 in [28] to obtain the mean-square convergence condition of the sparsity-aware NSAF algorithms from (26). Specifically, the matrix F1 given by (27)…”
Section: Stable Convergence Conditions (mentioning)
confidence: 99%
“…The difference between LMS and GAL lies only in the underlying filtering structure: LMS uses a transversal structure, whereas GAL uses a lattice structure. The LMS algorithm has an inherent limitation in that it can find only local minima, not the global minimum [8]. However, this limitation can be overcome by initializing the search at multiple points simultaneously.…”
Section: LMS Algorithm (mentioning)
confidence: 99%
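To make the structural contrast in the statement above concrete, below is a minimal transversal (tapped-delay-line) LMS sketch; a GAL filter would instead propagate forward and backward prediction errors through cascaded lattice stages. Function and parameter names are illustrative.

```python
import numpy as np

def lms_transversal(x, d, n_taps, mu=0.01):
    """Plain LMS on a transversal structure: the regressor is simply
    the delay line of the last n_taps input samples, and a single
    weight vector is adapted by stochastic gradient descent."""
    w = np.zeros(n_taps)
    e_hist = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # delay line: x[n], ..., x[n-n_taps+1]
        y = w @ u                           # filter output
        e_hist[n] = d[n] - y                # error driving the adaptation
        w += mu * e_hist[n] * u             # LMS weight update
    return w, e_hist
```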
“…In particular, the work in [16] introduced the concept of augmented complex statistics, in which both the covariance matrix and the pseudo-covariance matrix are combined within an augmented covariance matrix. Augmented complex statistics, i.e. the widely linear model approach, has been used widely and effectively in the design of LMS-based adaptive filtering algorithms [14], [17]–[24]. For example, in the studies in [14], [17]–[24], the widely linear (WL, or augmented) complex-valued LMS (WL-CLMS) algorithm was proposed for processing complex-valued circular and noncircular signals.…”
unclassified
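The widely linear model described above can be summarised in a few lines: a strictly linear complex filter exploits only the covariance E[uu^H], whereas the widely linear filter also exploits the pseudo-covariance E[uu^T] by filtering the augmented regressor [u; conj(u)]. A minimal sketch, with illustrative names and a standard CLMS-style update rather than any specific published variant:

```python
import numpy as np

def wl_clms(x, d, n_taps, mu=0.01):
    """Widely linear complex LMS sketch: output y = h^H u + g^H conj(u),
    so the filter sees the augmented regressor [u; conj(u)] and can model
    noncircular (improper) complex signals, whose pseudo-covariance
    E[u u^T] is nonzero."""
    h = np.zeros(n_taps, dtype=complex)   # standard (linear) weights
    g = np.zeros(n_taps, dtype=complex)   # conjugate (widely linear) weights
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]
        y = np.conj(h) @ u + np.conj(g) @ np.conj(u)  # widely linear output
        e = d[n] - y                                  # complex error
        h += mu * np.conj(e) * u                      # CLMS-type updates on
        g += mu * np.conj(e) * np.conj(u)             # the augmented weights
    return h, g
```

For circular inputs the conjugate weights g converge toward zero and the filter reduces to the strictly linear CLMS, which is why the widely linear form never performs worse in the mean-square sense.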