2021
DOI: 10.48550/arxiv.2102.10846
Preprint

Expanding boundaries of Gap Safe screening

Cassio F. Dantas,
Emmanuel Soubies,
Cédric Févotte

Abstract: Sparse optimization problems are ubiquitous in many fields such as statistics, signal/image processing and machine learning. This has led to the birth of many iterative algorithms to solve them. A powerful strategy to boost the performance of these algorithms is known as safe screening: it allows the early identification of zero coordinates in the solution, which can then be eliminated to reduce the problem's size and accelerate convergence. In this work, we extend the existing Gap Safe screening framework by …
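To make the screening idea in the abstract concrete, here is a minimal sketch of the standard Gap Safe rule for the Lasso (the baseline rule this paper extends, not the authors' extended framework). The dual point construction and sphere radius follow the usual Gap Safe formulation; the function name and interface are illustrative.

```python
import numpy as np

def gap_safe_screen(X, y, w, lam):
    """Gap Safe screening test for the Lasso
    objective 0.5*||y - Xw||^2 + lam*||w||_1.
    Returns a boolean mask: True means feature j is provably zero at the
    optimum and can be eliminated from the problem."""
    residual = y - X @ w
    # Dual-feasible point obtained by rescaling the residual.
    theta = residual / max(lam, np.max(np.abs(X.T @ residual)))
    # Primal and dual objectives, hence the duality gap.
    primal = 0.5 * residual @ residual + lam * np.sum(np.abs(w))
    dual = 0.5 * y @ y - 0.5 * lam**2 * np.sum((theta - y / lam) ** 2)
    gap = max(primal - dual, 0.0)
    # Gap Safe sphere: center theta, radius sqrt(2*gap)/lam.
    radius = np.sqrt(2.0 * gap) / lam
    # Feature j is screened out when |x_j^T theta| + radius*||x_j|| < 1.
    scores = np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0)
    return scores < 1.0
```

Because the radius shrinks with the duality gap, the test discards more features as the iterate approaches the solution, while never discarding a feature that is active at the optimum.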

Cited by 2 publications (2 citation statements)
References 23 publications (42 reference statements)
“…Fortunately, the screening techniques developed for the convex optimization problem can be used to alleviate this problem. Building on the idea of Dantas et al (2021), we propose safe screening methods for the problem (5) and further derive efficient primal updates. We only need the following common assumption about the objective function h(w) for the remaining context to establish our theoretical results.…”
Section: Feature Screening (citation type: mentioning, confidence: 99%)
“…Strong rules (Tibshirani, 2011), on the other hand, are heuristics with no guarantee but able to prune a large number of features fast. A large body of work exists on screening rules for ℓ1-regularized regression (Wang et al, 2013; Liu et al, 2014; Fercoq et al, 2015; Ndiaye et al, 2017; Dantas et al, 2021), including some for logistic regression (Wang et al, 2014). However, little attention has been given to the ℓ0-regularized regression problem, where dimension reduction by screening rules can have substantially larger impact due to the higher computational burden for solving the non-convex regression problems.…”
Section: Introduction (citation type: mentioning, confidence: 99%)