2016
DOI: 10.1109/tvt.2016.2533664

Zero Attracting Recursive Least Squares Algorithms

Abstract: The l1-norm sparsity constraint is a widely used technique for constructing sparse models. In this contribution, two zero-attracting recursive least squares algorithms, referred to as ZA-RLS-I and ZA-RLS-II, are derived by employing the l1-norm of the parameter vector as a constraint to facilitate model sparsity. In order to achieve a closed-form solution, the l1-norm of the parameter vector is approximated by an adaptively weighted l2-norm, in which the weighting factors are set as the inversion of the as…
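The approximation described in the abstract can be illustrated with a short sketch: the l1 penalty is replaced by a weighted l2 (ridge-style) penalty whose per-coefficient weights are refreshed from the current estimate, so a closed-form least-squares solution remains available at each pass. Because the abstract is truncated here, this is only a generic iteratively reweighted least-squares illustration of that idea, not the paper's exact ZA-RLS-I/II recursions; the names reweighted_l2_sparse_ls, lam, and eps are illustrative.

```python
# Sketch (assumed, not the paper's exact algorithm): approximate ||w||_1 by
# sum_i w_i^2 / (|w_i| + eps), i.e. a weighted l2 penalty whose weights come
# from the latest estimate, and re-solve the resulting ridge problem.
import numpy as np

def reweighted_l2_sparse_ls(X, y, lam=0.1, eps=1e-6, n_iter=20):
    """Least squares with an adaptively weighted l2 approximation of the l1 penalty."""
    n_features = X.shape[1]
    w = np.zeros(n_features)
    for _ in range(n_iter):
        # Weighting factors: inverse magnitude of the current estimate, so
        # small coefficients are penalised more strongly (zero attraction).
        D = np.diag(1.0 / (np.abs(w) + eps))
        # Closed-form weighted ridge solution.
        w = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
    return w

# Example: recover a sparse parameter vector from noisy observations.
rng = np.random.default_rng(0)
w_true = np.array([0.0, 1.5, 0.0, 0.0, -0.8, 0.0])
X = rng.standard_normal((200, 6))
y = X @ w_true + 0.05 * rng.standard_normal(200)
print(np.round(reweighted_l2_sparse_ls(X, y), 3))
```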

Cited by 28 publications (23 citation statements)
References 24 publications
“…To possibly improve channel estimation performance with fast convergence speed, we find that developing a sparse RLS algorithm is a promising solution. 21 By introducing a sparse constraint function, e.g., zero-attracting 10 and approximate ℓ0-norm (L0), 22 we propose two sparse RLS channel estimation algorithms: RLS using a zero-attracting sparse constraint (RLS-ZA) and RLS using an approximate ℓ0-norm sparse constraint (RLS-L0). First, the proposed sparse RLS-type algorithms can achieve faster convergence speed than the LMS-type algorithms.…”
Section: Discussion
confidence: 99%
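The zero-attracting constraint mentioned in the statement above can be illustrated with a short sketch: a standard RLS recursion whose weight update gains an extra term proportional to sign(w) that pulls inactive taps toward zero. This is only a generic illustration of the zero-attracting idea, not the cited papers' exact RLS-ZA/RLS-L0 recursions; the function name za_rls and the step sizes are assumptions.

```python
# Assumed sketch: exponentially weighted RLS plus a sign-based zero-attracting
# correction on the weight vector.
import numpy as np

def za_rls(X, d, lam=0.999, delta=1e2, rho=1e-3):
    """RLS with a zero-attracting (sign-based) correction on each update."""
    n = X.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)                   # inverse correlation matrix estimate
    for x, y in zip(X, d):
        k = P @ x / (lam + x @ P @ x)       # gain vector
        e = y - w @ x                       # a priori error
        w = w + k * e - rho * np.sign(w)    # RLS update plus zero attraction
        P = (P - np.outer(k, x @ P)) / lam  # inverse correlation matrix update
    return w
```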
“…23,24 Thus, the proposed model is a new formulation using the convex combination of EKFs, which not only enhances the tracking capability but also improves the error convergence property of the algorithm. 11,15,18 As stated in Equation (25), the state update equation used to estimate the state variables is based on the Kalman gain and the estimation error. The cost function is normally the MSE, which is quadratic in nature, so minimization of the cost function leads to the optimal solution.…”
Section: Proposed Convex Combination Based Sparse Adaptive Estimation
confidence: 99%
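The convex combination referenced in the statement above is, in its general form, a mixture of two component estimates through a weight lambda = sigmoid(a), with the auxiliary parameter a adapted by gradient descent on the squared error of the combined output. The sketch below shows only that mixing update, with the component filters (EKFs in the cited work) abstracted away; it is a generic illustration, not the cited model, and the names combine_step and mu_a are assumptions.

```python
# Assumed sketch: one adaptation step of a convex combination of two estimators.
import numpy as np

def combine_step(y1, y2, d, a, mu_a=0.5):
    """Mix two component outputs and adapt the mixing parameter a."""
    lam = 1.0 / (1.0 + np.exp(-a))      # convex mixing weight in (0, 1)
    y = lam * y1 + (1.0 - lam) * y2     # combined output
    e = d - y                           # combined estimation error
    # Gradient descent on e^2 with respect to a (chain rule through the sigmoid).
    a = a + mu_a * e * (y1 - y2) * lam * (1.0 - lam)
    return y, a
```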
“…Looking at the limitations of adaptive filters, current research works are oriented towards the development of adaptive estimation models that include a norm penalty in the cost function, so that the underlying sparseness leads to faster convergence speed with reduced steady-state mean square error (MSE). [11][12][13][14][15] Wu and Tong 14 proposed an ℓp-norm penalty in the cost function of the LMS-type algorithm to achieve better error performance than the ℓ0 and ℓ1 norms. Hong et al 15 proposed ZA-RLS by adding the ℓ1-norm of the parameter vector as a penalty to the RLS cost function for sparse channel estimation.…”
Section: Introduction
confidence: 99%
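For context, a generic ℓp-norm-penalised LMS update of the kind these works build on adds the (sub)gradient of the sparsity penalty to the usual LMS step. The sketch below is only structural: the exact penalty forms and step sizes in the cited references differ, and the names lp_lms_step, mu, and rho are illustrative.

```python
# Assumed sketch: one LMS update with an lp-style sparsity penalty (0 < p <= 1).
import numpy as np

def lp_lms_step(w, x, d, mu=0.01, rho=1e-4, p=0.5, eps=1e-8):
    """Standard LMS step plus the subgradient of sum_i |w_i|^p."""
    e = d - w @ x                                             # instantaneous error
    penalty_grad = p * np.sign(w) * (np.abs(w) + eps) ** (p - 1.0)
    return w + mu * e * x - rho * penalty_grad
```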