2020 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv45572.2020.9093375
Towards Photographic Image Manipulation with Balanced Growing of Generative Autoencoders

Cited by 13 publications (14 citation statements, all of type "mentioning"). References 17 publications.
“…Also, it is clear that the reconstructions are not sharp, and blurry objects are reconstructed. This is typical for a variational auto-encoder, and while many approaches exist to create sharper reconstructions (Makhzani et al., 2015; Heljakka et al., 2018, 2020; Huang et al., 2018), we argue that this is not necessary for our case. As long as the generated observations are spatially correlated and the object properties such as size and color are correctly reconstructed, the generative model will be capable of working within the active inference framework.…”
Section: Discussion
Citation type: mentioning (confidence: 74%)
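For context on why the excerpt above calls blurry reconstructions "typical for a variational auto-encoder": the standard VAE objective combines a pixel-wise reconstruction term with a KL regularizer, and the pixel-wise term encourages averaging over plausible outputs. The following minimal PyTorch sketch is illustrative only, not code from the cited works, and all names are hypothetical.

```python
# Minimal sketch of the standard VAE objective (illustrative; hypothetical names).
# The pixel-wise MSE term corresponds to a Gaussian decoder likelihood and tends
# to average over plausible reconstructions, which is one common explanation for
# the blurriness noted in the excerpt above.
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar):
    # Reconstruction term: summed pixel-wise squared error.
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # KL divergence between q(z|x) = N(mu, diag(exp(logvar))) and the prior N(0, I).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```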
“…Whereas Shannon (1951) used prediction to exploit an observer's knowledge about regularities and thereby measure the redundancy in signals, unsupervised learning uses prediction and other objectives to bootstrap knowledge about statistical regularities into DNNs. Unsupervised DNNs have demonstrated impressive abilities to extract higher-order regularities, which often do map onto physical properties of scenes and objects (Heljakka et al., 2020; Higgins et al., 2017; Kingma & Welling, 2013; Lotter et al., 2020; Salimans et al., 2017; van den Oord et al., 2016, 2017) and may be more robust to noise and image perturbations than regularities learned via supervised training (Hendrycks et al., 2019). The time is now ripe to revisit redundancy reduction with these newly potent learning objectives, some of which we review here.…”
Section: Unsupervised Learning: Acquiring Deep Knowledge Through Proximal Objectives
Citation type: mentioning (confidence: 99%)
“…Autoencoders and their variants (Heljakka et al., 2020; Higgins et al., 2017; Kingma & Welling, 2013; van den Oord et al., 2017) embody this approach to learning. They are trained to encode large databases of images in highly compacted representations, sometimes just a dozen or so numbers per image, as opposed to tens of thousands for the raw pixels.…”
Section: Compression
Citation type: mentioning (confidence: 99%)
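To make the compression described above concrete, here is a minimal autoencoder sketch (assumed PyTorch; the architecture and sizes are illustrative choices, not taken from the cited works) that encodes a 784-pixel image into a 12-dimensional code, i.e. "a dozen or so numbers per image":

```python
# Minimal autoencoder sketch (illustrative; all sizes are assumptions).
import torch
import torch.nn as nn

encoder = nn.Sequential(                 # compresses 784 raw pixels...
    nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 12),
)                                        # ...into a 12-number code
decoder = nn.Sequential(                 # reconstructs the image from the code
    nn.Linear(12, 128), nn.ReLU(), nn.Linear(128, 784), nn.Sigmoid(),
)

x = torch.rand(1, 1, 28, 28)             # a raw 28x28 image (784 numbers)
code = encoder(x)                        # compacted representation (12 numbers)
x_recon = decoder(code).view(1, 1, 28, 28)  # approximate reconstruction
```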
“…An autoencoder (N_E, N_D), where N_E and N_D are feed-forward ANNs called the encoder and the decoder respectively, is a model whose goal is to compress (encode) its inputs x ∈ ℝ^{n_I} to low-dimensional vectors l = N_E(x) ∈ ℝ^{n_L} (again, n_L < n_I) such that approximate decompression (decoding, reconstruction) can be achieved: N_D(l) is close to x. A generative autoencoder (such as in [21, 32]) is an autoencoder whose decoder is additionally trained to sample from the original distribution D; thus, essentially, a generative autoencoder performs both the tasks of an autoencoder and a GAN. For a well-trained generative autoencoder, we may assume both l ∼ D_L ⇒ N_D(l) ∼ D and…”
Section: Generative Models
Citation type: mentioning (confidence: 99%)
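The definitions in this excerpt map directly onto code. Below is a minimal sketch (assumed PyTorch; the dimensions and the choice of D_L as a standard normal prior are illustrative assumptions, not from the cited work) of the generative-autoencoder interface: N_D(N_E(x)) approximates x, and N_D(l) for l ∼ D_L generates samples approximately from D.

```python
# Minimal generative-autoencoder sketch (illustrative; the dimensions and the
# standard-normal latent prior D_L are assumptions, not from the cited work).
import torch
import torch.nn as nn

n_I, n_L = 784, 32                       # input and latent dims, with n_L < n_I
N_E = nn.Sequential(nn.Linear(n_I, 256), nn.ReLU(), nn.Linear(256, n_L))  # encoder
N_D = nn.Sequential(nn.Linear(n_L, 256), nn.ReLU(), nn.Linear(256, n_I))  # decoder

x = torch.rand(8, n_I)                   # a batch of inputs x ~ D
x_recon = N_D(N_E(x))                    # autoencoding: N_D(N_E(x)) close to x
l = torch.randn(8, n_L)                  # latent samples l ~ D_L (assumed N(0, I))
x_gen = N_D(l)                           # generation: N_D(l) ~ D when well trained
```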