2016
DOI: 10.1137/141000737

A Fast Active Set Block Coordinate Descent Algorithm for $\ell_1$-Regularized Least Squares

Abstract: The problem of finding sparse solutions to underdetermined systems of linear equations arises in several applications (e.g. signal and image processing, compressive sensing, statistical inference). A standard tool for dealing with sparse recovery is the ℓ1-regularized least-squares approach, which has recently been attracting the attention of many researchers. In this paper, we describe an active set estimate (i.e. an estimate of the indices of the zero variables in the optimal solution) for the consi…
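As a rough illustration of the objects named in the abstract, the sketch below evaluates the ℓ1-regularized least-squares objective and a generic active set estimate. The test used (x_i numerically zero and |∇f(x)_i| ≤ λ, i.e. a zero value is consistent with the subdifferential condition) is a simplified stand-in, not the paper's own estimate; all names are illustrative.

```python
import numpy as np

def l1_ls_objective(A, b, x, lam):
    """l1-regularized least-squares objective: 0.5*||A@x - b||^2 + lam*||x||_1."""
    r = A @ x - b
    return 0.5 * r @ r + lam * np.abs(x).sum()

def active_set_estimate(A, b, x, lam, eps=1e-8):
    """Generic estimate of the active set, i.e. indices expected to be zero
    at the optimum: x_i is (numerically) zero and |grad_i f(x)| <= lam, so
    that zero satisfies the subdifferential optimality condition.
    NOTE: a simplified stand-in, not the exact estimate from the paper."""
    grad = A.T @ (A @ x - b)  # gradient of the smooth part f(x) = 0.5*||Ax - b||^2
    return np.where((np.abs(x) <= eps) & (np.abs(grad) <= lam))[0]
```

With A of shape (m, n) and m < n, the returned index array collects the variables that a method could provisionally fix at zero while working on the remaining free variables.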

Cited by 33 publications (51 citation statements)
References 33 publications (76 reference statements)
“…Any variable j ∈ U_F for which this equality does not hold is removed from the set U_F and added to U_A. The set A_k is then updated according to (9), and a new trial step is recomputed by solving (10). We repeat this corrective cycle until all predictions are correct and the trial point x^k satisfies (11).…”
Section: The Proposed Algorithm
Citation type: mentioning (confidence: 99%)
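The corrective cycle quoted above can be summarized in a short sketch. Every callback below is a hypothetical stand-in, since Eqs. (9)–(11) of the citing paper are not reproduced in the excerpt:

```python
def corrective_cycle(U_F, U_A, update_active_set, solve_trial_step, equality_holds):
    """Sketch of the corrective cycle described in the quoted excerpt.
    Hypothetical callbacks (the excerpt does not reproduce the equations):
      update_active_set(U_A)  -- rebuilds the active set A_k as in (9)
      solve_trial_step(A_k)   -- recomputes the trial point as in (10)
      equality_holds(j, x)    -- the per-variable test on the free set U_F
    """
    while True:
        A_k = update_active_set(U_A)
        x_trial = solve_trial_step(A_k)
        # Variables in U_F whose "free" prediction turned out to be wrong:
        wrong = {j for j in U_F if not equality_holds(j, x_trial)}
        if not wrong:
            # All predictions are correct; per the quoted description the
            # trial point now also satisfies (11), and the cycle ends.
            return x_trial
        U_F -= wrong  # remove mispredicted variables from the free set...
        U_A |= wrong  # ...and add them to the active set
```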
“…SpaRSA and FISTA [3,10,37], and proximal Newton methods that compute a step by minimizing a piecewise quadratic model of (1) using (for example) a coordinate descent iteration [5,16,19,23,26,29,32,38]. The proposed algorithm also differs from methods that solve (1) by reformulating it as a bound-constrained problem [12,18,27,28,33,35], and from recent methods that are specifically designed for the case when f is a convex quadratic [9,30,34].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
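For context on the proximal gradient family named in this excerpt (SpaRSA, FISTA), a minimal ISTA-type update for the smooth part f(x) = 0.5‖Ax − b‖² is sketched below. This is the standard textbook iteration, not code from any of the cited papers:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_step(A, b, x, lam, alpha):
    """One proximal-gradient (ISTA-type) step for
        min_x 0.5*||A@x - b||^2 + lam*||x||_1,
    with step size alpha (e.g. alpha = 1/||A||_2^2)."""
    grad = A.T @ (A @ x - b)
    return soft_threshold(x - alpha * grad, alpha * lam)
```

FISTA wraps this same update in Nesterov extrapolation, while SpaRSA replaces the fixed step size with an adaptive Barzilai–Borwein choice.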
“…Term II: Let us rewrite term II as […]. The desired result (14) follows readily by combining (16), (17), and (18).…”
Section: A. On the Random Sampling and Its Properties
Citation type: mentioning (confidence: 99%)
“…Our focus is on problems with a huge number of variables, such as those encountered, e.g., in machine learning, compressed sensing, data mining, tensor factorization and completion, network optimization, image processing, genomics, etc. We refer the reader to [2]–[14] and the books [15], [16] as entry points to the literature.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…In [4] and [6], these types of directions were considered in connection with Newton and semismooth Newton type updates, respectively. Active set strategies based on second-order information have also recently been envisaged [31,34,40]. This paper targets the question of whether some useful information can also be extracted from the special structure of the ℓ1-norm in order to design a second-order method.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)