2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2018.00577

Generative Image Inpainting with Contextual Attention

Figure 1: Example inpainting results of our method on images of natural scenes, faces, and textures. Missing regions are shown in white. In each pair, the left is the input image and the right is the direct output of our trained generative neural networks without any post-processing.

Abstract: Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create…

Cited by 2,210 publications (2,298 citation statements) | References 43 publications
“…The approach works well for high-resolution semantic inpainting. Yu et al. (2018b) propose a two-stage coarse-to-fine architecture to generate and refine the inpainting results, where the coarse network makes an initial estimation and the refinement network takes the initialization to produce finer results. In addition, at the refinement stage, a novel module termed Contextual Attention is designed to explicitly borrow information from the surroundings of the missing regions.…”
Section: Related Work (mentioning)
confidence: 99%
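The excerpt above summarizes the paper's core mechanism. To make it concrete, here is a minimal sketch of a contextual-attention step in PyTorch: hole features attend to background patches by cosine similarity and are reconstructed as an attention-weighted sum of those patches. Function and variable names are illustrative assumptions; the authors' released implementation adds refinements (multi-scale propagation, attention smoothing, strided patch extraction) not shown here.

```python
# Minimal sketch of contextual attention: single-scale, stride-1 patches.
# Illustrative only; the paper's implementation differs in details.
import torch
import torch.nn.functional as F

def contextual_attention(features, mask, patch_size=3, softmax_scale=10.0):
    """features: (1, C, H, W) feature map; mask: (1, 1, H, W) with 1 = hole."""
    k, pad = patch_size, patch_size // 2

    # Extract k x k patches at every spatial location to act as conv filters.
    patches = F.unfold(features, kernel_size=k, padding=pad)   # (1, C*k*k, H*W)
    kernels = patches.transpose(1, 2).reshape(-1, features.shape[1], k, k)

    # Cosine similarity = convolution with L2-normalized background patches.
    norms = kernels.flatten(1).norm(dim=1).clamp(min=1e-8)
    scores = F.conv2d(features, kernels / norms.view(-1, 1, 1, 1), padding=pad)

    # Holes may only attend to patches that contain no hole pixels.
    overlaps_hole = F.max_pool2d(mask, k, stride=1, padding=pad) > 0
    scores = scores.masked_fill(overlaps_hole.flatten().view(1, -1, 1, 1), -1e4)
    attn = F.softmax(softmax_scale * scores, dim=1)            # over patches

    # Reconstruct hole features as the attention-weighted sum of patches.
    filled = F.conv_transpose2d(attn, kernels, padding=pad) / (k * k)
    return mask * filled + (1.0 - mask) * features
```

The scaled softmax (`softmax_scale`) sharpens the attention distribution so each hole location draws mostly from its best-matching background patch rather than an even blend.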
“…Datasets and Baselines: We evaluate our approach on three datasets, CelebA (Liu et al. 2015), Places2 (Zhou et al. 2017), and Facade (Tyleček and Šára 2013), and compare the results with the following state-of-the-art methods both qualitatively and quantitatively:
- GL: proposed by Iizuka, Simo-Serra, and Ishikawa (2017), which uses two discriminators to ensure global and local consistency of the generated image.
- CA: proposed by Yu et al. (2018b), which leverages a coarse-to-fine architecture with a contextual attention layer to produce and refine the inpainting results.
- PEN-Net: proposed by Zeng et al. (2019), which adopts a pyramid context encoder to fill missing regions with features at both the image level and the feature level.…”
Section: Experimental Settings (mentioning)
confidence: 99%
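Since the GL baseline's defining feature is its pair of discriminators, a compact sketch may help make that design concrete. This is an assumed, simplified layout (layer widths and the fusion head are illustrative, not Iizuka et al.'s exact architecture): one branch scores the whole image, the other scores a crop around the filled region, and the two are fused into a single real/fake logit.

```python
# Sketch of a global + local discriminator in the spirit of GL.
# Channel counts and the linear fusion head are assumptions.
import torch
import torch.nn as nn

def conv_stack(in_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.LeakyReLU(0.2),
        nn.Conv2d(128, 256, 5, stride=2, padding=2), nn.LeakyReLU(0.2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class GlobalLocalDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.global_branch = conv_stack(3)   # sees the full image
        self.local_branch = conv_stack(3)    # sees a crop around the hole
        self.head = nn.Linear(256 + 256, 1)  # fused real/fake logit

    def forward(self, full_image, local_patch):
        g = self.global_branch(full_image)
        l = self.local_branch(local_patch)
        return self.head(torch.cat([g, l], dim=1))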
“…Although CGAN was initially designed for class-conditioned image generation by setting y as the class label of the image, several types of conditioning information can be applied, such as a full image for image-to-image translation [4] or a partial image as in inpainting [18]. CGAN-based inpainting methods rely on generating a patch that fills in a structured missing part of the image, and achieve impressive results.…”
Section: Image Reconstruction with GAN in Related Work (mentioning)
confidence: 99%
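To make the conditioning idea concrete, below is a minimal sketch of an inpainting generator conditioned on a partial image, in the CGAN spirit the excerpt describes: the condition y is the masked image, and the generator only needs to synthesize the missing region. The encoder-decoder layout, channel counts, and the mask-concatenation choice are illustrative assumptions, not the cited papers' architectures.

```python
# Sketch of CGAN-style conditioning for inpainting: condition = partial image.
# Assumes input images are scaled to [-1, 1]; architecture is illustrative.
import torch
import torch.nn as nn

class ConditionalInpaintGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # Condition: masked RGB image concatenated with the binary mask (4 ch).
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, image, mask):
        # y = partial image: known pixels kept, hole pixels zeroed out.
        masked = image * (1.0 - mask)
        out = self.decoder(self.encoder(torch.cat([masked, mask], dim=1)))
        # Composite: only the hole region comes from the generator.
        return masked + out * mask
```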
“…Another valuable work is [34], which presents a novel contextual attention layer to explicitly attend to related feature patches at distant spatial locations. [9] uses a stack of partial convolution layers and mask-updating steps to perform image inpainting with an autoencoder, without adversarial learning.…”
Section: Related Work (mentioning)
confidence: 99%
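The partial-convolution mechanism of [9] is compact enough to sketch. The version below follows the published rule (renormalize the convolution by the fraction of valid pixels under each window, then mark a location valid once any valid pixel enters its window) but simplifies away bias handling and per-channel masks, so treat it as an approximation rather than the reference implementation.

```python
# Sketch of a partial convolution layer with mask updating, as in [9].
# Simplified: no bias term, single-channel mask shared across features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2, bias=False)
        # Fixed all-ones kernel used to count valid pixels under each window.
        self.register_buffer("ones", torch.ones(1, 1, kernel_size, kernel_size))

    def forward(self, x, mask):
        # mask: (N, 1, H, W) float tensor, 1 = valid pixel, 0 = hole.
        valid_count = F.conv2d(mask, self.ones, padding=self.conv.padding[0])
        out = self.conv(x * mask)
        # Renormalize by the fraction of valid pixels in each window.
        scale = self.ones.numel() / valid_count.clamp(min=1.0)
        out = out * scale * (valid_count > 0).float()
        # Mask update: valid wherever at least one valid input was seen.
        new_mask = (valid_count > 0).float()
        return out, new_mask
```

Stacking such layers shrinks the hole at every step, which is why [9] can inpaint with a plain autoencoder and no adversarial loss.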
“…The free-form inpainting model: the generator has the same architecture as [34], followed by a refinement network without residual connections. The discriminator is a PatchGAN that classifies image patches of size 70×70 as real or fake.…”
Section: Architectures and Training (mentioning)
confidence: 99%
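For reference, a 70×70 PatchGAN is conventionally built as the fully convolutional stack below (the standard pix2pix layout), where each output logit scores one 70×70 receptive field of the input. The normalization choice and channel widths follow the common recipe and are assumptions about, not a transcription of, the cited model.

```python
# Sketch of a 70x70 PatchGAN discriminator (standard pix2pix layout).
# Output is a grid of per-patch real/fake logits, e.g. 30x30 for 256x256 input.
import torch.nn as nn

def patchgan_70x70(in_ch=3):
    def block(cin, cout, stride, norm=True):
        layers = [nn.Conv2d(cin, cout, 4, stride=stride, padding=1)]
        if norm:
            layers.append(nn.InstanceNorm2d(cout))
        layers.append(nn.LeakyReLU(0.2))
        return layers

    return nn.Sequential(
        *block(in_ch, 64, stride=2, norm=False),
        *block(64, 128, stride=2),
        *block(128, 256, stride=2),
        *block(256, 512, stride=1),
        nn.Conv2d(512, 1, 4, stride=1, padding=1),  # per-patch logits
    )
```

Because the receptive field of each logit is exactly 70×70 pixels, the discriminator penalizes local texture artifacts independently across the image rather than issuing one global real/fake decision.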