2020
DOI: 10.48550/arxiv.2005.05224
Preprint

A derivative-free method for structured optimization problems

Abstract: Structured optimization problems are ubiquitous in fields like data science and engineering. The goal in structured optimization is to use a prescribed set of points, called atoms, to build up a solution that minimizes or maximizes a given function. In the present paper, we want to minimize a black-box function over the convex hull of a given set of atoms, a problem that can be used to model a number of real-world applications. We focus on problems whose solutions are sparse, i.e., solutions that can be obtained…
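The setting the abstract describes — minimizing a black-box function over the convex hull of a finite set of atoms — can be illustrated with a minimal sketch. The routine below is a generic derivative-free search over the simplex of convex-combination weights, not the paper's actual algorithm; the function name, step-control constants, and stopping rule are all illustrative assumptions.

```python
import numpy as np

def dfo_over_convex_hull(f, atoms, iters=500, seed=0):
    """Minimize a black-box f over conv(atoms).

    Generic sketch, not the paper's method: parameterize points in the
    hull as x = lam @ A with lam on the unit simplex, and do a simple
    derivative-free search that moves weight toward random vertices.
    """
    rng = np.random.default_rng(seed)
    A = np.asarray(atoms, dtype=float)   # shape (m, n): one atom per row
    m = A.shape[0]
    lam = np.full(m, 1.0 / m)            # start at the barycenter
    best = f(lam @ A)
    step = 0.5
    for _ in range(iters):
        # Shift weight toward a random vertex of the simplex; the trial
        # point is a convex combination of feasible points, so it stays
        # inside the hull by construction.
        j = rng.integers(m)
        e = np.zeros(m)
        e[j] = 1.0
        trial = (1.0 - step) * lam + step * e
        val = f(trial @ A)
        if val < best:
            lam, best = trial, val
        else:
            step = max(step * 0.95, 1e-3)  # shrink the step on failure
    return lam @ A, best

# Example: minimize ||x - (0.3, 0.3)||^2 over the triangle with
# vertices (0, 0), (1, 0), (0, 1); the optimum lies inside the hull.
atoms = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
x, val = dfo_over_convex_hull(lambda x: np.sum((x - 0.3) ** 2), atoms)
```

Note that only function values of `f` are ever used — no gradients — which is what makes the scheme derivative-free; the sparsity of the weight vector, emphasized in the abstract, is not enforced by this naive sketch.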

Cited by 2 publications (2 citation statements)
References 30 publications
“…Large-Scale DFO There have been several alternative approaches considered for improving the scalability of DFO. These often consider problems with specific structure which enable efficient model construction, such as partial separability [24,60], sparse Hessians [4], and minimization over the convex hull of finitely many points [27]. On the other hand, there is a growing body of literature on 'gradient sampling' techniques for machine learning problems.…”
Section: Existing Literature
confidence: 99%
“…This type of problem arises across a broad range of application areas (Conn et al., 2009; Audet & Hare, 2017), but has attracted particular recent attention in the learning community for problems such as black-box attacks (Chen et al., 2017; Ughi et al., 2019), hyperparameter tuning (Ghanbari & Scheinberg, 2017; Lakhmiri et al., 2020) and reinforcement learning (Mania et al., 2018; Choromanski et al., 2019). A current deficiency of DFO methods is their performance on large-scale problems, which is critical to their utility in machine learning; there have been several recent works aimed at improving the scalability of DFO (Bergou et al., 2019; Roberts, 2019; Porcelli & Toint, 2020; Cristofari & Rinaldi, 2020).…”
Section: Introduction
confidence: 99%