2011
DOI: 10.1109/lsp.2011.2159373
RLS Algorithm With Convex Regularization

Cited by 137 publications
(175 citation statements)
References 8 publications
“…Many algorithms have been developed in the literature to solve the problem in (3) [12]–[20], where the mean square error (MSE) criterion [21], based on second-order statistics, is employed; these algorithms are optimal when e is Gaussian noise. In practical applications, however, the transmitted signals are distorted not only by Gaussian noise but also by other kinds of noise, such as burst noise and high noise.…”
Section: Introduction
confidence: 99%
“…The work of [15] proposes an adaptive version of the greedy least squares method using partial orthogonalization to systems. The work of [16] modifies the RLS algorithm by adding a general convex function of the system parameters to its cost function, resulting in the l0-RLS and l1-RLS algorithms. Compared with the LMS-based algorithms, the RLS-based algorithms converge faster and yield more accurate parameter estimates.…”
Section: Introduction
confidence: 99%
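The l1-RLS idea mentioned in the quote above can be illustrated with a toy sketch: a standard exponentially weighted RLS update followed by a small subgradient (zero-attracting) step on an l1 penalty. All names, parameter values, and the particular sign-based shrinkage step here are illustrative assumptions, not the exact algorithm of [16]:

```python
import numpy as np

def l1_rls(x_seq, d_seq, n_taps, lam=0.99, delta=1e-2, gamma=1e-4):
    """Toy sketch of an l1-regularized RLS filter.

    lam   : forgetting factor of the exponentially weighted RLS
    delta : regularization of the initial inverse correlation matrix
    gamma : strength of the l1 (zero-attracting) subgradient step
    """
    w = np.zeros(n_taps)
    P = np.eye(n_taps) / delta              # inverse correlation estimate
    for x, d in zip(x_seq, d_seq):
        x = np.asarray(x, dtype=float)
        k = P @ x / (lam + x @ P @ x)       # RLS gain vector
        e = d - w @ x                       # a priori error
        w = w + k * e                       # conventional RLS update
        w = w - gamma * np.sign(w)          # l1 subgradient step: shrinks
                                            # small taps toward zero
        P = (P - np.outer(k, x @ P)) / lam  # inverse-correlation update
    return w
```

The sign-based step biases every tap toward zero by gamma per iteration; for taps of a sparse system that are truly zero, this keeps the estimate near zero, which is the intuition behind the sparsity-promoting RLS variants discussed above.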
“…Similar to [10], [16], an l1-norm penalty on the parameter vector is added to the RLS cost function. For tractability, we further approximate the l1-norm penalty term by an adaptively weighted l2-norm term, in which the weights are readily given by the inverse of the magnitudes of the parameter estimates currently available in the adaptive learning environment.…”
Section: Introduction
confidence: 99%
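The reweighting trick in the quote above rests on a simple identity: an l2 term with weights set to the reciprocal magnitudes of the previous estimate reproduces the l1 norm when evaluated at that estimate. A minimal sketch (function name and the eps smoothing are illustrative assumptions):

```python
import numpy as np

def reweighted_l2_penalty(w, w_prev, eps=1e-8):
    # ||w||_1 approximated as sum_i w_i^2 / (|w_prev_i| + eps).
    # When w == w_prev this equals the l1 norm (up to the eps smoothing),
    # yet as a function of w it is quadratic, hence tractable in RLS.
    return np.sum(w**2 / (np.abs(w_prev) + eps))
```

Because the quadratic surrogate is exact at the current estimate, updating the weights at each iteration lets a least-squares solver mimic the sparsity-promoting effect of the l1 penalty.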