2021
DOI: 10.3390/fi13070179

Fast Flow Reconstruction via Robust Invertible n × n Convolution

Abstract: Flow-based generative models have recently become one of the most efficient approaches to model data generation. Indeed, they are constructed with a sequence of invertible and tractable transformations. Glow first introduced a simple type of generative flow using an invertible 1×1 convolution. However, the 1×1 convolution suffers from limited flexibility compared to the standard convolutions. In this paper, we propose a novel invertible n×n convolution approach that overcomes the limitations of the invertible …

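To make the abstract's idea of invertible, tractable transformations concrete, here is a minimal NumPy sketch of the Glow-style invertible 1×1 convolution that the paper builds on. The class name, initialization, and tensor shapes are assumptions for illustration only; this is not the authors' implementation of the proposed n×n convolution.

```python
import numpy as np

class Invertible1x1Conv:
    """Glow-style invertible 1x1 convolution over an (H, W, C) feature map (illustrative sketch)."""

    def __init__(self, num_channels, seed=0):
        rng = np.random.default_rng(seed)
        # Initialize with a random orthogonal matrix so the layer starts volume-preserving.
        q, _ = np.linalg.qr(rng.normal(size=(num_channels, num_channels)))
        self.weight = q  # learnable C x C matrix; invertible as long as det != 0

    def forward(self, x):
        # Apply the same C x C matrix to every pixel's channel vector.
        h, w, _ = x.shape
        y = x @ self.weight.T
        # Log-determinant term of the change-of-variables formula, scaled by spatial size.
        logdet = h * w * np.linalg.slogdet(self.weight)[1]
        return y, logdet

    def inverse(self, y):
        # Exact inverse: multiply by the inverse matrix.
        return y @ np.linalg.inv(self.weight).T


# Round trip: the inverse should reconstruct the input up to numerical error.
x = np.random.default_rng(1).normal(size=(8, 8, 4))
layer = Invertible1x1Conv(num_channels=4)
y, logdet = layer.forward(x)
assert np.allclose(layer.inverse(y), x)
```

Because every pixel shares the same C×C matrix, both the inverse and the log-determinant term are cheap to compute; the paper's contribution is to extend this invertibility beyond 1×1 kernels to general n×n convolutions.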

Cited by 7 publications (1 citation statement)
References 25 publications (31 reference statements)
“…Deep learning-based image recognition studies have recently achieved very accurate performance in visual applications, e.g. image classification [1], [2], [3], face recognition [4], [5], [6], [7], [8], image synthesis [9], [10], [11], [12], [13], [14], action recognition [15], [16], semantic segmentation [17], [18]. However, these methods assume that the testing images come from the same distribution as the training images; therefore, these deep learning-based models are likely to fail when applied to real data in new domains.…”
Section: Introduction (mentioning)
confidence: 99%