2016
DOI: 10.1007/s10994-016-5578-4
Fast and scalable Lasso via stochastic Frank–Wolfe methods with a convergence guarantee

Abstract: Frank-Wolfe (FW) algorithms have often been proposed over the last few years as efficient solvers for a variety of optimization problems arising in the field of machine learning. The ability to work with cheap projection-free iterations and the incremental nature of the method make FW a very effective choice for many large-scale problems where computing a sparse model is desirable. In this paper, we present a high-performance implementation of the FW method tailored to solve large-scale Lasso regression problems…
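The abstract summarizes why FW iterations suit sparse, large-scale problems. For orientation, the sketch below shows the classical deterministic FW iteration for the ℓ1-constrained (Lasso) formulation that the paper's stochastic variant builds on. It is not the paper's implementation; the function name and parameters (frank_wolfe_lasso, delta, n_iters) are illustrative only.

```python
import numpy as np

def frank_wolfe_lasso(A, b, delta, n_iters=200):
    """Classical Frank-Wolfe sketch for min 0.5*||A x - b||^2 s.t. ||x||_1 <= delta."""
    n_features = A.shape[1]
    x = np.zeros(n_features)
    residual = -b                          # A @ x - b with x = 0
    for k in range(n_iters):
        grad = A.T @ residual              # gradient of the least-squares objective
        i = int(np.argmax(np.abs(grad)))   # LMO over the l1-ball: best signed vertex delta * e_i
        s_i = -delta * np.sign(grad[i])
        gamma = 2.0 / (k + 2.0)            # standard diminishing step size; a line search also works
        x *= (1.0 - gamma)                 # convex combination keeps x inside the l1-ball
        x[i] += gamma * s_i
        # incremental residual update: A @ x_new - b = (1-gamma)*(A @ x_old - b) + gamma*(A @ s - b)
        residual = (1.0 - gamma) * residual + gamma * (s_i * A[:, i] - b)
    return x
```

Each step touches a single coordinate and keeps the iterate inside the ℓ1-ball by construction, which is what the abstract refers to as cheap, projection-free, incremental iterations producing sparse models.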

Cited by 12 publications (18 citation statements); references 35 publications. Citing publications span 2017–2022.
“…The algorithm enjoys strong convergence guarantees and it is easy to implement in practice. Further algorithmic improvements could also be obtained by implementing the stochastic acceleration studied in [4]. Our experiments showed that the proposed method is competitive with the state of the art in terms of accuracy, with a small but systematic advantage when sparser models are required.…”
Section: Discussion (mentioning)
confidence: 91%
“…One possible reason for this is the fact that the original proposal had a high computational cost, making it unattractive for large models. We reconsider this technique arguing that recent advancements in Lasso optimization make it possible to solve the problem efficiently using high-performance algorithms with strong theoretical guarantees [4]. Importantly, we show in Sect.…”
Section: Sparse SVM Approximations via Kernelized Lasso (mentioning)
confidence: 99%
“…One recent direction to achieve this is to replace the LMO with a randomized linear oracle in which the linear minimization is performed only over a random sample of the original atomic domain. This approach has proven to be highly successful on specific problems such as structured SVMs (Lacoste-Julien et al., 2013) and ℓ1-constrained regression (Frandi et al., 2016); however, little is known in the general case. Is it possible to design a FW variant with a randomized oracle that achieves the same convergence rate (up to a constant factor) as the non-randomized variant?…”
Section: Introduction (mentioning)
confidence: 99%
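As a concrete illustration of the randomized oracle the quoted passage describes, the sketch below performs the linear minimization only over a random sample of the signed coordinate vertices of the ℓ1-ball rather than over the full atomic domain. This is a generic sketch under the passage's assumptions, not code from the cited works; the name randomized_lmo_l1 and the sample_size parameter are hypothetical.

```python
import numpy as np

def randomized_lmo_l1(grad, delta, sample_size, rng):
    """Approximate LMO for the l1-ball: minimize the linear model over a random coordinate sample."""
    p = grad.shape[0]
    candidates = rng.choice(p, size=min(sample_size, p), replace=False)
    best = candidates[np.argmax(np.abs(grad[candidates]))]   # best atom within the sample only
    return int(best), -delta * np.sign(grad[best])
```

Dropped into the FW loop sketched after the abstract in place of the full argmax, each iteration inspects only sample_size coordinates instead of all p, which is the source of the speed-up, at the price of a convergence guarantee that typically holds only in expectation.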