2007
DOI: 10.1016/j.acha.2007.02.002
Coordinate and subspace optimization methods for linear least squares with non-quadratic regularization

Cited by 165 publications (151 citation statements)
References 23 publications
“…Assuming that the multiplication by the dictionary (and its adjoint) has a fast O(n log n) algorithm, the overall process is very fast and effective. We should also mention that fast IST-like algorithms were recently proposed by several authors [5], [22], [57], [3].…”
Section: Input (mentioning; confidence: 99%)
“…This strategy can be generalized with sequential subspace optimization (SESOP) [32], [33] for further acceleration, e.g. PCD-SESOP [11], [33] and PCD-SESOP-MM [34]. The variable splitting technique [35], [36] (also known as separable surrogate functionals (SSF) [33]) provides yet another powerful tool for minimizing functions that consist of a sum of two terms of different nature, e.g.…”
Section: B. Approaches To Solve the Problem (mentioning; confidence: 99%)
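As a rough illustration of the parallel coordinate descent (PCD) direction mentioned in this excerpt, the sketch below solves each coordinate's one-dimensional subproblem with the other coordinates held fixed, takes the difference from the current point as a search direction, and applies a simple backtracking line search. The backtracking rule and the function names are illustrative assumptions, not the exact procedure of the cited papers.

```python
# Hedged sketch of one PCD step with backtracking line search for
# 0.5*||A x - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def objective(A, b, lam, x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

def pcd_step(A, b, lam, x):
    q = np.sum(A * A, axis=0)              # per-column squared norms ||a_j||^2
    grad = A.T @ (A @ x - b)
    # Each coordinate minimizes its own 1-D problem with the others held fixed.
    x_star = soft_threshold(x - grad / q, lam / q)
    d = x_star - x                          # parallel coordinate descent direction
    alpha, f0 = 1.0, objective(A, b, lam, x)
    while objective(A, b, lam, x + alpha * d) > f0 and alpha > 1e-8:
        alpha *= 0.5                        # backtrack until the step decreases f
    return x + alpha * d
```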
“…The strategy of these algorithms is to use smartly chosen descent directions that are combinations of the results of the previous two iterates. This strategy can be generalized with sequential subspace optimization (SESOP) [32], [33] for further acceleration, e.g. PCD-SESOP [11], [33] and PCD-SESOP-MM [34].…”
Section: B. Approaches To Solve the Problem (mentioning; confidence: 99%)
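The subspace idea behind SESOP that this excerpt points to can be sketched briefly: rather than a single line search, the objective is minimized over the coefficients of a small set of stored directions (for instance the current descent direction and the previous step). The derivative-free inner solver used here is purely an illustrative choice, not what the cited papers do.

```python
# Hedged sketch of a SESOP-style subspace step: minimize the objective over
# coefficients of a few search directions instead of along a single line.
import numpy as np
from scipy.optimize import minimize

def objective(A, b, lam, x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

def sesop_step(A, b, lam, x, directions):
    D = np.column_stack(directions)         # n x m matrix of stored directions
    inner = lambda alpha: objective(A, b, lam, x + D @ alpha)
    res = minimize(inner, np.zeros(D.shape[1]), method="Nelder-Mead")
    return x + D @ res.x                    # best point found in the subspace
```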
“…However, the classical proximal gradient algorithm was regarded as slow until the exciting progress of recent years. The "accelerated" approaches are mainly based on an extrapolation step that relies not only on the current point but also on two or more previously computed iterates (e.g., [14,15]). Other notable algorithms include those of Nesterov [16], Beck & Teboulle [6], Tseng [17], O'Donoghue & Candès [18], and references therein.…”
Section: Introduction (mentioning; confidence: 99%)
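A minimal sketch of the extrapolation step described in this excerpt, in the FISTA style of Beck & Teboulle: each proximal gradient step is taken from a point that combines the current and previous iterates. The same ℓ1-regularized least-squares model as above is assumed, and the code is illustrative rather than any cited author's implementation.

```python
# FISTA-style accelerated proximal gradient sketch for
# min_x 0.5*||A x - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def accelerated_proximal_gradient(A, b, lam, n_iter=100):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))
        # Extrapolate from the current and previous iterates before the next step.
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```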