2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019
DOI: 10.1109/iccv.2019.00335

Controlling Neural Networks via Energy Dissipation

Abstract: The last decade has shown a tremendous success in solving various computer vision problems with the help of deep learning techniques. Lately, many works have demonstrated that learning-based approaches with suitable network architectures even exhibit superior performance for the solution of (ill-posed) image reconstruction problems such as deblurring, super-resolution, or medical image reconstruction. The drawback of purely learning-based methods, however, is that they cannot provide provable guarantees for th…

Cited by 11 publications (8 citation statements, all classified as mentioning); References: 43 publications. Citing publications span 2019–2024.
“…The results by cVAE, LGD, and GM3 are comparable, at least visually. The ability to reconstruct tumours further indicates that cVAE does not miss out on important features that are not present in the training data, indicating a certain degree of robustness of the cVAE framework, so long as the signal is sufficiently strong, since many deep learning-based methods tend to miss the tumour due to the bias induced by tumour-free training data [57].…”
Section: Numerical Experiments and Discussion (mentioning)
confidence: 99%
“…[13], or combining optimizers with networks that have been trained individually [14,15]. The recent work of Moeller et al. [16] trains a network to predict descent directions for a given energy in order to give provable convergence results on the learned optimizer.…”
Section: Related Work (mentioning)
confidence: 99%
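The mechanism this statement refers to, a network that outputs update directions guaranteed to decrease a given energy, can be sketched in a few lines. The following Python/NumPy sketch is a minimal illustration rather than the authors' exact algorithm: net, grad_E, eps, and tau are assumed placeholder interfaces and parameters, and the safeguard simply projects the learned proposal onto the half-space of directions d satisfying ⟨d, ∇E(u)⟩ ≤ −eps·∥∇E(u)∥².

import numpy as np

def safeguarded_step(u, net, grad_E, eps=1e-2, tau=1.0):
    """One update whose direction provably dissipates the energy E.

    Hypothetical interfaces (assumptions, not taken from the paper):
      net(u)    -- network proposal for an update direction
      grad_E(u) -- gradient of the energy E at u
    """
    g = grad_E(u)
    gg = float(np.vdot(g, g))
    if gg < 1e-12:                   # (near) stationary point: stop
        return u
    d = net(u)                       # learned proposal, no guarantee by itself
    bound = -eps * gg                # descent condition: <d, g> <= -eps * ||g||^2
    inner = float(np.vdot(d, g))
    if inner > bound:
        # Project the proposal onto the half-space {d : <d, g> <= bound},
        # changing it as little as possible while restoring guaranteed descent.
        d = d - ((inner - bound) / gg) * g
    return u + tau * d

Combined with a line search or sufficiently small steps, such a descent condition is what lets classical convergence arguments go through regardless of how well the network was trained.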
“…Previous works have addressed similar problems by training networks that are safeguarded by a suitable cost function in order to guarantee bounds such as (3), see [1,2]. Unfortunately, the approach of Moeller et al. [1] is limited to the case where d(Au, f) is continuously differentiable in u, e.g. the investigated classical case of Gaussian noise where d(Au, f) = ∥Au − f∥².…”
Section: Introduction (mentioning)
confidence: 99%
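The limitation quoted above concerns smoothness of the data term. As a concrete illustration (a sketch under the assumption, made only for clarity, that A is available as an explicit matrix), the Gaussian case d(Au, f) = ∥Au − f∥² has the everywhere-defined gradient 2Aᵀ(Au − f), which a gradient-based safeguard can use, whereas an ℓ1 fidelity only admits a subgradient:

import numpy as np

def gaussian_data_term(A, u, f):
    """Smooth fidelity d(Au, f) = ||Au - f||^2 with gradient 2 A^T (Au - f)."""
    r = A @ u - f
    return float(r @ r), 2.0 * A.T @ r          # gradient exists everywhere

def l1_data_term(A, u, f):
    """Non-smooth fidelity ||Au - f||_1; where (Au - f) has zero entries only a
    subgradient exists, so a purely gradient-based safeguard no longer applies."""
    r = A @ u - f
    return float(np.abs(r).sum()), A.T @ np.sign(r)   # a subgradient, not a gradient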