2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01152
Orthogonal Convolutional Neural Networks

Cited by 132 publications (128 citation statements); references 19 publications.
“…Besides, our discriminator, while capable of detecting local artifacts, provides little control for separating such artifacts from global semantic errors. Recent work on image synthesis may provide guidance on designing better discriminative models [88], training procedures [103], [104], image parametrizations [23], [105], [106], or perceptual loss functions [53]. Second, we proposed a synthesis solution based on manipulating latent spaces within the generative model, but explicitly training the network to generate tileable textures may provide better results than our approach.…”
Section: Discussion (mentioning)
confidence: 99%
“…The orthogonalization method using Newton's iteration (ONI) [16] trains proxy matrices as the layer parameters and transforms them into orthogonal matrices through Newton's iteration. Wang et al. [40] impose filter orthogonality on a convolutional layer based on the doubly block-Toeplitz matrix representation of the convolutional kernel and derive a new regularization term in the loss function that pursues orthogonality. These methods can work with existing architectures to improve performance.…”
Section: Related Work (mentioning)
confidence: 99%
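The regularizer attributed to Wang et al. [40] in the excerpt above can be made concrete with a short sketch: the kernel is correlated with itself, and the result is pushed toward an identity concentrated at the central spatial offset, which corresponds to row orthogonality of the doubly block-Toeplitz matrix. The sketch below is a minimal PyTorch illustration under the assumption of stride 1 and "full" padding; the function name `kernel_orth_penalty` and the way the penalty would be weighted into a training loss are illustrative choices, not code from the cited works.

```python
import torch
import torch.nn.functional as F

def kernel_orth_penalty(kernel: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of a kernel-orthogonality regularizer.

    kernel: (out_channels, in_channels, k, k). The kernel is correlated with
    itself; for an orthogonal convolution the result should equal an identity
    map at the central spatial offset and zero everywhere else.
    """
    o, c, k, _ = kernel.shape
    padding = k - 1  # "full" correlation: cover every spatial offset
    corr = F.conv2d(kernel, kernel, padding=padding)      # (o, o, 2k-1, 2k-1)
    target = torch.zeros_like(corr)
    ct = corr.shape[-1] // 2                              # central offset
    target[:, :, ct, ct] = torch.eye(o, device=kernel.device)
    return (corr - target).pow(2).sum()

# Illustrative use (lambda_orth is an assumed hyperparameter):
# loss = task_loss + lambda_orth * sum(
#     kernel_orth_penalty(m.weight)
#     for m in model.modules() if isinstance(m, torch.nn.Conv2d))
```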
“…W ∈ R^(D'×D) is the weight matrix of the layer and x ∈ R^D is the output of the preceding layer. The linear operation in a convolutional layer may be represented as a doubly-block Toeplitz matrix [14]. Another way to perform the operation is to employ reshaping operators to represent the linear operator as a dense matrix applied to all the patches extracted from the input [15].…”
Section: A Weight Matrix As a Product of Sparse Matrices (mentioning)
confidence: 99%
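The two views mentioned in this excerpt, the doubly-block Toeplitz matrix acting on the flattened input versus a dense matrix applied to all extracted patches, can be checked against each other numerically. The PyTorch sketch below (tensor shapes and variable names are arbitrary choices for illustration, not taken from the cited works) verifies that an ordinary convolution and the unfold-then-matrix-multiply formulation produce the same output.

```python
import torch
import torch.nn.functional as F

# Arbitrary illustrative sizes: 3 input channels, 4 output channels, 3x3 kernel.
C_in, C_out, k, H, W = 3, 4, 3, 8, 8
x = torch.randn(1, C_in, H, W)
weight = torch.randn(C_out, C_in, k, k)

# View 1: standard convolution, i.e. the doubly-block Toeplitz operator
# built from the kernel applied to the input feature map.
y_conv = F.conv2d(x, weight)                  # (1, C_out, H-k+1, W-k+1)

# View 2: extract every k x k patch (im2col / unfold) and apply the kernel
# as one dense (C_out) x (C_in*k*k) matrix to all patches at once.
patches = F.unfold(x, kernel_size=k)          # (1, C_in*k*k, num_patches)
W_dense = weight.reshape(C_out, -1)           # (C_out, C_in*k*k)
y_patch = (W_dense @ patches).reshape(1, C_out, H - k + 1, W - k + 1)

assert torch.allclose(y_conv, y_patch, atol=1e-5)
```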