2019
DOI: 10.48550/arxiv.1901.11352
Preprint

Deep Learning for Inverse Problems: Bounds and Regularizers

Cited by 2 publications (2 citation statements)
References 0 publications
“…Regularization with an Extra Penalty Some works have tried to maintain the weight constraints using an additional penalty on the objective function, which can be viewed as a regularization. This regularization technique is mainly used for learning the weight matrices with orthogonality constraints, for its efficiency in computation [4], [16], [146], [154], [155]. Orthogonal regularization methods have demonstrated improved performance in image classification [16], [154], [156], [157], resisting attacks from adversarial examples [158], neural photo editing [159] and training GANs [23], [30].…”
Section: Training With Constraints (mentioning)
Confidence: 99%
“…However, the initial orthogonality can be broken down and is not necessarily sustained throughout training [71]. Previous works have tried to maintain the orthogonal weight matrix by imposing an additional orthogonality penalty on the objective function, which can be viewed as a 'soft orthogonal constraint' [51,66,71,5,3]. These methods show improved performance in image classification [71,76,40,5], resisting attacks from adversarial examples [13], neural photo editing [11] and training generative adversarial networks (GAN) [10,47].…”
Section: Introduction (mentioning)
Confidence: 99%
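
Both excerpts refer to the same idea of a "soft orthogonal constraint": instead of enforcing orthogonality exactly, a penalty such as ||W W^T - I||_F^2 is added to the training objective, so the optimizer is pushed toward (near-)orthogonal weight matrices. Below is a minimal sketch of such a penalty in PyTorch; the function name and the hyperparameter lam are illustrative and not taken from the cited works.

import torch

def soft_orthogonality_penalty(weight: torch.Tensor) -> torch.Tensor:
    # Flatten conv kernels (out, in, h, w) into a 2-D matrix of shape (out, in*h*w);
    # fully connected weights pass through unchanged.
    w = weight.reshape(weight.shape[0], -1)
    gram = w @ w.t()  # Gram matrix of the flattened rows
    eye = torch.eye(gram.shape[0], device=w.device, dtype=w.dtype)
    # ||W W^T - I||_F^2: zero exactly when the rows of W are orthonormal.
    return ((gram - eye) ** 2).sum()

# During training the penalty is added to the task loss, weighted by a
# hypothetical hyperparameter lam:
#   loss = task_loss + lam * sum(soft_orthogonality_penalty(p)
#                                for p in model.parameters() if p.dim() > 1)

Because the constraint enters only through the loss, it is cheap to compute and works with any optimizer, but, as the second excerpt notes, orthogonality is encouraged rather than guaranteed throughout training.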