2018
DOI: 10.1007/s11590-018-1325-z
“Active-set complexity” of proximal gradient: How long does it take to find the sparsity pattern?

Abstract: Proximal gradient methods have been found to be highly effective for solving minimization problems with non-negative constraints or ℓ1-regularization. Under suitable nondegeneracy conditions, it is known that these algorithms identify the optimal sparsity pattern for these types of problems in a finite number of iterations. However, it is not known how many iterations this may take. We introduce the notion of the "active-set complexity", which in these cases is the number of iterations before an algorithm is guaranteed to have identified the final sparsity pattern. […]
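For readers unfamiliar with how a proximal gradient method produces a sparsity pattern, the following is a minimal sketch, not taken from the paper, of ISTA for ℓ1-regularized least squares. The prox step is the soft-threshold operator, which sets small coordinates exactly to zero; the iterations counted by the paper's "active-set complexity" are those needed before this zero pattern stops changing. All names and the setup here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    # Prox of t*||.||_1: shrinks each coordinate toward zero,
    # setting entries with |z_i| <= t exactly to zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, n_iters=500):
    # Proximal gradient (ISTA) for 0.5*||Ax - b||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)       # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```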

Cited by 26 publications (16 citation statements) | References 22 publications
“…Now, we are ready to show that (23) holds for all sufficiently large iterations. Our analysis takes inspiration from the one in [42] for proximal gradient methods, where it is proved that the active set is identified in a neighborhood of the optimal solution under the non-degeneracy assumption. That neighborhood is defined in [42] using a problem-dependent constant related to ‘the amount of degeneracy’ of the optimal solution.…”
Section: Technical Results
confidence: 99%
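The neighborhood-identification property described in this excerpt can be observed numerically: once the iterates are close enough to a nondegenerate solution, the zero pattern of the iterates never changes again. Below is a hedged sketch, under an assumed toy setup and reusing the illustrative soft_threshold helper from above, that records the first iteration after which the support stays fixed; this is an empirical stand-in for the active-set complexity the paper bounds, not the paper's own experiment.

```python
import numpy as np

def soft_threshold(z, t):
    # Prox of t*||.||_1 (same illustrative helper as in the sketch above).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Assumed toy problem: a sparse ground truth recovered by ISTA.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[:3] = 1.0
b = A @ x_true
lam = 0.5
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient

x = np.zeros(10)
supports = []
for _ in range(300):
    x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    supports.append(tuple(x != 0))     # current sparsity pattern

# First iteration after which the sparsity pattern never changes again.
first_stable = next(k for k in range(len(supports))
                    if all(s == supports[k] for s in supports[k:]))
print("sparsity pattern fixed from iteration", first_stable)
```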
“…In the literature, much effort has been devoted to proving identification properties of some algorithms for smooth optimization [3,5,6,7,8,9,10,11,19,21,24,46,48], non-smooth optimization [16,23,25,29,31,35,41,42,47,49], stochastic optimization [18,28,45], and derivative-free optimization [30]. Moreover, a wide class of methods, known as active-set methods, has been the object of extensive study for decades (see, e.g., [4,13,14,17,20,22] and the references therein), making use of specific techniques to identify the so-called active set, that is, the set of constraints or variables that parametrizes a surface containing a solution.…”
Section: Introduction
confidence: 99%
“…It would be of great interest and importance to give a characterization of that neighborhood, in order to bound the maximum number of iterations required by the algorithm to identify the active set. Currently, this is an open problem and we think it may represent a possible line of future research, for example by adapting the complexity results given for ALGENCAN in [7], or extending some results on finite active-set identification given in the literature for specific classes of algorithms [8,10,22].…”
Section: Active-set Estimate
confidence: 99%
“…Recently, explicit active-set complexity bounds have been given for some of the methods listed above. Bounds for proximal gradient and block coordinate descent methods were analyzed in [35,34] under strong convexity assumptions on the objective. A more systematic analysis covering many gradient-related proximal methods (e.g., accelerated gradient, quasi-Newton, and stochastic gradient proximal methods) was carried out in [37].…”
confidence: 99%