2018
DOI: 10.48550/arxiv.1807.07554
Preprint

A geometric integration approach to nonsmooth, nonconvex optimisation

Abstract: The optimisation of nonsmooth, nonconvex functions without access to gradients is a particularly challenging problem that is frequently encountered, for example in model parameter optimisation problems. Bilevel optimisation of parameters is a standard setting in areas such as variational regularisation problems and supervised machine learning. We present efficient and robust derivative-free methods called randomised Itoh-Abe methods. These are generalisations of the Itoh-Abe discrete gradient method, a well-known…
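To make the abstract's idea concrete, here is a minimal sketch of a single randomised Itoh-Abe update, assuming a simple fixed-point inner solver for the scalar discrete-gradient equation; the function randomised_itoh_abe_step and its parameters are illustrative choices, not the paper's implementation:

import numpy as np

def randomised_itoh_abe_step(V, x, tau, rng, a0=1e-3, inner_iters=20):
    # Draw a random unit direction d and solve the scalar discrete-gradient
    # equation  a / tau = -(V(x + a*d) - V(x)) / a  for a step length a,
    # using function evaluations only (no gradients required).
    d = rng.standard_normal(x.size)
    d /= np.linalg.norm(d)
    Vx = V(x)
    for sign in (1.0, -1.0):          # try both orientations of d
        a = sign * a0
        for _ in range(inner_iters):  # fixed-point iteration on a^2 = -tau*(V(x + a*d) - Vx)
            rhs = -tau * (V(x + a * d) - Vx)
            if rhs <= 0.0:            # no decrease along this orientation
                a = 0.0
                break
            a = sign * np.sqrt(rhs)
        if a != 0.0:
            return x + a * d
    return x                          # no admissible nonzero step found; keep x

At an exact solution a of the scalar equation, V(x + a*d) - V(x) = -a^2/tau <= 0, so every accepted step decreases V; a full method repeats such updates over many random directions.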

Cited by 3 publications (7 citation statements) | References 46 publications (64 reference statements)

Citation statements
“…Bilevel learning with a DFO algorithm was previously considered [16], but there a different DFO method, based on discrete gradients, was used and applied to nonsmooth problems with exact lower-level evaluations. In [16], only up to two parameters were learned, whereas here we demonstrate that our approach is capable of learning many more. Our numerical results include examples with up to 64 parameters.…”
Section: Contributions
mentioning
confidence: 99%
“…A by-now common strategy to learn parameters of a variational regularization model from data is bilevel learning, see e.g. [11][12][13][14][15][16][17] and references in [4]. Given labelled data (x_i, y_i), i = 1, …, n, we find parameters θ ∈ Θ ⊂ ℝ^m by solving the upper-level problem. The lower-level objective Φ_{i,θ} could be of the form Φ_{i,θ}(x) = D(Ax, y_i) + θR(x) as in (1.1), but we will not restrict ourselves to this special case.…”
mentioning
confidence: 99%
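To make the bilevel structure in this excerpt concrete, the following sketch evaluates an upper-level loss by numerically solving each lower-level problem; the quadratic data fit D(Ax, y) = 0.5*||Ax - y||^2, the smooth regulariser R(x) = 0.5*||x||^2, and the use of SciPy's BFGS solver are illustrative assumptions rather than the cited papers' setup:

import numpy as np
from scipy.optimize import minimize

def lower_level_solve(theta, A, y):
    # Solve min_x Phi_theta(x) = 0.5*||A x - y||^2 + theta * 0.5*||x||^2,
    # a smooth stand-in for the general lower-level objective Phi_{i,theta}.
    phi = lambda x: 0.5 * np.sum((A @ x - y) ** 2) + 0.5 * theta * np.sum(x ** 2)
    return minimize(phi, np.zeros(A.shape[1]), method="BFGS").x

def upper_level_loss(theta, A, pairs):
    # Mean squared error of the lower-level reconstructions against the ground
    # truths x_i, i.e. an upper-level objective over labelled pairs (x_i, y_i).
    return np.mean([np.sum((lower_level_solve(theta, A, y) - x) ** 2)
                    for x, y in pairs])

A derivative-free method is then run on theta -> upper_level_loss(theta, A, pairs), with every evaluation requiring n lower-level solves; this per-evaluation cost is what motivates the distinction between exact and inexact lower-level evaluations drawn above.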
“…In the case of the Itoh-Abe discrete gradient, the rates have an O(n^{1/2}) dependence on the problem size n; in Lemma 1 we show that the rates can be improved when V has sparsely connected unknowns. A convergence result for Itoh-Abe-type discrete gradient schemes applied to non-differentiable, non-convex problems is presented in [27], with applications to parameter estimation problems.…”
Section: Introduction
mentioning
confidence: 99%
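For context on the dimension dependence mentioned in this excerpt: the Itoh-Abe discrete gradient is the standard coordinate-increment construction from geometric integration (the definition below is that standard one, not a quote from the citing paper),

\[
\overline{\nabla} V(x, y)_i
  = \frac{V(y_1, \dots, y_i, x_{i+1}, \dots, x_n) - V(y_1, \dots, y_{i-1}, x_i, \dots, x_n)}{y_i - x_i},
  \qquad i = 1, \dots, n.
\]

Each component is a difference quotient of function values only, so the induced optimisation scheme is derivative-free; updating the n coordinates in sequence is what ties the per-step cost, and hence the convergence rates, to the problem size n.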