2020
DOI: 10.48550/arxiv.2001.03725
Preprint

Symmetric Skip Connection Wasserstein GAN for High-Resolution Facial Image Inpainting

Abstract: We propose a Symmetric Skip Connection Wasserstein Generative Adversarial Network (S-WGAN) for high-resolution facial image inpainting. The architecture is an encoder-decoder with convolutional blocks, linked by skip connections. The encoder is a feature extractor that captures data abstractions of an input image to learn an end-to-end mapping from an input (binary masked image) to the ground truth. The decoder uses the learned abstractions to reconstruct the image. With skip connections, S-WGAN transfers image…
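For orientation, here is a minimal PyTorch sketch of the kind of symmetric skip-connection encoder-decoder generator the abstract describes. The depth, channel widths, activations, and layer types are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class SkipEncoderDecoder(nn.Module):
    """Sketch of a symmetric skip-connection encoder-decoder generator."""

    def __init__(self, in_ch=3, base=64):
        super().__init__()
        # Encoder: each block halves the spatial resolution.
        self.enc1 = self._down(in_ch, base)
        self.enc2 = self._down(base, base * 2)
        self.enc3 = self._down(base * 2, base * 4)
        # Decoder: mirrored upsampling blocks; input channels double where
        # a skip connection concatenates encoder features.
        self.dec3 = self._up(base * 4, base * 2)
        self.dec2 = self._up(base * 4, base)   # base*2 (decoder) + base*2 (skip)
        self.dec1 = nn.Sequential(
            nn.ConvTranspose2d(base * 2, in_ch, 4, stride=2, padding=1),
            nn.Tanh(),                         # output image in [-1, 1]
        )

    @staticmethod
    def _down(cin, cout):
        return nn.Sequential(
            nn.Conv2d(cin, cout, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )

    @staticmethod
    def _up(cin, cout):
        return nn.Sequential(
            nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        e1 = self.enc1(x)   # data abstractions captured by the encoder
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        d3 = self.dec3(e3)
        # Symmetric skip connections: mirrored encoder features are passed
        # straight to the decoder so low-level image detail is transferred.
        d2 = self.dec2(torch.cat([d3, e2], dim=1))
        return self.dec1(torch.cat([d2, e1], dim=1))
```

In the paper's setting, the input would be the binary-masked face image and the target the ground truth; a batch of shape (N, 3, 256, 256) maps to an output of the same shape.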

Cited by 7 publications (11 citation statements)
References 23 publications
“…We trained our model with the generator and discriminator losses defined in Section 3.1. We used an architecture similar to the one proposed in [15] and applied loss weights (coefficients) to the generator loss. Our intention was to ensure that, during training, the generator is penalised more heavily by increasing the weight of its foreground loss, so that it learns structural and textural features and gains an overall understanding of the semantic nature of the face region.…”
Section: Methods
confidence: 99%
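A sketch of what such a weighted generator objective could look like. The loss terms and the coefficient values (w_adv, w_fg, w_bg) are assumptions chosen to illustrate a heavier foreground penalty, not values reported by the citing work.

```python
import torch

def masked_l1(pred, target, mask):
    # Mean absolute error over the pixels selected by a binary mask.
    return (mask * (pred - target)).abs().sum() / mask.sum().clamp(min=1.0)

def generator_loss(adv_loss, pred, target, fg_mask,
                   w_adv=1e-3, w_fg=10.0, w_bg=1.0):
    # Foreground (face region) reconstruction term: its larger coefficient
    # "punishes" the generator more on the semantically important region.
    fg = masked_l1(pred, target, fg_mask)
    # Background reconstruction term with a smaller weight.
    bg = masked_l1(pred, target, 1.0 - fg_mask)
    return w_adv * adv_loss + w_fg * fg + w_bg * bg
```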
“…3 Architecture: The design of our proposed network comprises an encoder-decoder as the generator (G_θ) and a discriminator (D_θ), to achieve realistic results. The encoder architecture is based on [15], with the exception of the foreground segmentation mask (henceforth, foreground mask) and the masked image as input. During training, we keep the foreground mask intact and do not pass it through convolutions.…”
Section: Foreground Facial Inpainting Framework
confidence: 99%
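One way to read "keep the foreground mask intact" is that the mask bypasses the convolutional path entirely and is only used to composite the output; the sketch below shows that reading. The compositing step and the function names are assumptions, since the quoted passage does not specify the exact routing.

```python
import torch

def inpaint(generator, masked_image, fg_mask):
    """Sketch: the foreground mask conditions the result without being convolved.

    masked_image: input image with the hole region zeroed out.
    fg_mask:      binary mask, 1 where content must be generated.
    """
    raw = generator(masked_image)  # G_theta prediction over the full image
    # The mask stays intact: it blends generated and known pixels rather
    # than being passed through the convolutional blocks itself.
    return fg_mask * raw + (1.0 - fg_mask) * masked_image
```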
“…A residual connection is achieved by adding the activation of an earlier layer to a deeper layer within the network. In previous works [9,57,17], feature extraction and propagation often fail on large missing background regions due to low-level feature capture and a poor transition to the decoder. We consider max-pooling, an operation that calculates the maximum value within each image patch, thereby highlighting its most salient feature.…”
Section: Recursive Residual Transition Layer
confidence: 99%
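As a concrete reading of this passage, the following sketch pairs a residual connection (an earlier activation added to a deeper layer's output) with a max-pooling step that keeps the maximum of each patch. The exact layer arrangement of the cited transition layer is an assumption here.

```python
import torch
import torch.nn as nn

class ResidualPoolBlock(nn.Module):
    """Residual connection followed by max-pooling (illustrative sketch)."""

    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.act = nn.ReLU(inplace=True)
        self.pool = nn.MaxPool2d(2)  # keeps the maximum of each 2x2 patch

    def forward(self, x):
        # Residual connection: add the block's input activation to the
        # output of a deeper layer before the non-linearity.
        y = self.conv2(self.act(self.conv1(x)))
        y = self.act(x + y)
        # Max-pooling highlights the most salient feature in each patch.
        return self.pool(y)
```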
“…Learning-based methods, popularly known as deep generative neural networks, have become the state of the art owing to their ability to learn distributions with regard to context. These approaches [9,10,11,12,13,14,15,16,17,18,19] use convolutional neural networks (CNNs) in an encoder-decoder within a GAN-based network to generate realistic images. These algorithms, with a wide range of parameters and layers, learn to manage feature extraction, propagation, and regularisation.…”
Section: Introduction
confidence: 99%
“…In contrast, the second group of approaches (deep learning methods) [27,13,38,37,23,24,36,39,16] uses generative neural networks to hallucinate the missing content of an image by encoding its semantic context into a feature space, from which a decoder produces realistic output. This is done through convolution, an operation that extracts feature maps by evaluating the dot product between a kernel and each location of the input image.…”
Section: Introduction
confidence: 99%
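The convolution described here, a dot product between a kernel and every location of the input, can be written out directly. This naive NumPy version (a "valid" cross-correlation, which is what deep-learning frameworks actually compute under the name convolution) is for illustration only.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide the kernel over the image; take the dot product at each location."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)  # one feature-map entry
    return out
```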