2017
DOI: 10.1007/978-3-319-58771-4_41

Learning Filter Functions in Regularisers by Minimising Quotients

Abstract: Learning approaches have recently become very popular in the field of inverse problems. A large variety of methods has been established in recent years, ranging from bi-level learning to high-dimensional machine learning techniques. Most learning approaches, however, only aim at fitting parametrised models to favourable training data whilst ignoring misfit training data completely. In this paper, we follow up on the idea of learning parametrised regularisation functions by quotient minimisation as established …
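As a rough guide to the construction described in the abstract (a sketch, assuming the quotient form of the earlier quotient-minimisation work this paper follows up on; the exact functional and constraints in the paper may differ): with a parametrised, one-homogeneous regulariser R(·; θ), favourable training data u⁺ and misfit training data u⁻, the learning problem takes the form of a generalised Rayleigh quotient,

```latex
\hat{\theta} \in \operatorname*{arg\,min}_{\theta}\;
  \frac{R(u^{+};\theta)}{R(u^{-};\theta)},
\qquad \text{e.g.}\quad
R(u;\theta) = \lVert h(\theta) \ast u \rVert_{1},
```

so that the learned regulariser is small on favourable data while remaining large on misfit data. The convolution-filter parametrisation h(θ) and the 1-norm are illustrative assumptions here, not a statement of the paper's exact model.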

Cited by 4 publications (3 citation statements) | References 21 publications
“…It is important to emphasize that even for nonsmooth, nonconvex optimization there is a vast number of recent publications, ranging from forward-backward (proximal-type) schemes [8,9,10,49,50] and linearized proximal schemes [365,47,366,298], to inertial methods [299,309], primal-dual algorithms [361,267,279,34], scaled gradient projection methods [310], nonsmooth Gauß-Newton extensions [149,300] and nonlinear eigenproblems [206,59,32,51,261,31]. We focus mainly on recent generalizations of the proximal gradient method and the linearized Bregman iteration for nonconvex functionals E in the following.…”
Section: Nonconvex Optimization
confidence: 99%
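As a concrete illustration of the forward-backward (proximal gradient) iteration that the citing passage singles out, here is a minimal sketch for a composite objective F(x) = E(x) + g(x) with E smooth (possibly nonconvex) and g = λ‖·‖₁. The sparse least-squares example at the bottom is purely illustrative and not taken from the cited works.

```python
# Minimal sketch of a forward-backward / proximal gradient iteration for
# F(x) = E(x) + g(x), with E smooth (possibly nonconvex) and g = lam * ||.||_1,
# whose proximal map is soft-thresholding. Illustrative only.
import numpy as np

def soft_threshold(x, t):
    """Proximal map of t * ||.||_1 (component-wise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(grad_E, prox_g, x0, step, n_iter=200):
    """Iterate x_{k+1} = prox_{step*g}( x_k - step * grad E(x_k) )."""
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_g(x - step * grad_E(x), step)
    return x

# Toy example: sparse least squares, min_x 0.5*||A x - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = rng.standard_normal(60) * (rng.random(60) < 0.1)   # sparse ground truth
b = A @ x_true
lam = 0.1
grad_E = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, t: soft_threshold(v, t * lam)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of grad E
x_hat = proximal_gradient(grad_E, prox_g, np.zeros(60), step)
```

For a convex E this is the classical ISTA step; in the nonconvex setting the same update is used, with convergence arguments relying on the Kurdyka-Łojasiewicz framework discussed in the cited survey.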
“…2017), linearized proximal schemes (Xu and Yin 2013, Bolte, Sabach and Teboulle 2014, Xu and Yin 2017, Nikolova and Tan 2017), inertial methods (Ochs, Chen, Brox and Pock 2014, Pock and Sabach 2016), primal–dual algorithms (Valkonen 2014, Li and Pong 2015, Moeller, Benning, Schönlieb and Cremers 2015, Benning, Knoll, Schönlieb and Valkonen 2015), scaled gradient projection methods (Prato et al. 2016), non-smooth Gauss–Newton extensions (Drusvyatskiy, Ioffe and Lewis 2016, Ochs, Fadili and Brox 2017), and nonlinear eigenproblems (Hein and Bühler 2010, Bresson, Laurent, Uminsky and Brecht 2012, Benning, Gilboa and Schönlieb 2016, Boţ and Csetnek 2017, Laurent, von Brecht, Bresson and Szlam 2016, Benning, Gilboa, Grah and Schönlieb 2017c). Here we focus mainly on recent generalizations of the proximal gradient method and the linearized Bregman iteration for non-convex functionals E; a treatment of all the algorithms mentioned above would be a subject for a survey paper in its own right.…”
Section: Advanced Issues
confidence: 99%
“…We found this to be a crucial factor in achieving good performance in such non-convex minimization algorithms. In Benning et al. (2017) a related iterative algorithm for learning regularizers by quotients of one-homogeneous functionals was proposed. Convergence of the algorithm was shown, based on the theory of Łojasiewicz (1963) and Bolte et al. (2014).…”
Section: Introduction
confidence: 99%
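To make the quotient-minimisation idea referred to in this passage concrete, below is a purely illustrative projected-gradient toy on a quotient of two one-homogeneous terms over a unit-norm filter. The signals, filter size and optimisation scheme are assumptions for illustration; this is not the convergent scheme of Benning et al. (2017), which is analysed via the Łojasiewicz/Kurdyka-Łojasiewicz theory mentioned above.

```python
# Toy illustration (not the algorithm of Benning et al. 2017): rough projected
# gradient descent on Q(h) = ||h * u_plus||_1 / ||h * u_minus||_1 over a
# unit-norm filter h, with u_plus "favourable" and u_minus "misfit" data.
import numpy as np

def conv(h, u):
    return np.convolve(u, h, mode="same")

def quotient(h, u_plus, u_minus):
    return np.sum(np.abs(conv(h, u_plus))) / np.sum(np.abs(conv(h, u_minus)))

def numerical_grad(f, h, eps=1e-6):
    """Central finite differences; a crude heuristic since the quotient is nonsmooth."""
    g = np.zeros_like(h)
    for i in range(h.size):
        e = np.zeros_like(h); e[i] = eps
        g[i] = (f(h + e) - f(h - e)) / (2 * eps)
    return g

rng = np.random.default_rng(1)
u_plus = np.sin(np.linspace(0, 4 * np.pi, 200))            # smooth "favourable" signal
u_minus = u_plus + 0.5 * rng.standard_normal(200)          # noisy "misfit" signal
h = rng.standard_normal(7); h /= np.linalg.norm(h)

f = lambda h: quotient(h, u_plus, u_minus)
for _ in range(300):
    h -= 0.05 * numerical_grad(f, h)
    h /= np.linalg.norm(h)                                  # project back onto the unit sphere
print("final quotient value:", f(h))
```

The unit-sphere constraint plays the role of the normalisation needed for a quotient of one-homogeneous functionals; the cited work replaces this heuristic descent with an iteration whose convergence can actually be proven.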