2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00895
Image Inpainting With Learnable Bidirectional Attention Maps

Abstract: Most convolutional neural network (CNN)-based inpainting methods adopt standard convolution to treat valid pixels and holes indistinguishably, which limits their handling of irregular holes and makes them more likely to produce inpainting results with color discrepancy and blurriness. Partial convolution has been suggested to address this issue, but it adopts handcrafted feature re-normalization and only considers forward mask-updating. In this paper, we present a learnable attention map module for learning feature renormali…
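The handcrafted re-normalization that the abstract contrasts with can be sketched as follows. This is a minimal NumPy illustration of partial convolution's fixed rule (Liu et al., ECCV 2018), not the paper's learnable variant; the function name and shapes are my own for illustration.

```python
import numpy as np

def partial_conv_patch(x, m, w, b):
    """One output position of a partial convolution.

    x, m, w : (kh, kw) arrays -- feature patch, binary mask patch, kernel.
    b       : scalar bias.
    Returns the re-normalized response and the updated mask value.
    """
    valid = m.sum()
    if valid == 0:
        # Patch is entirely hole: emit bias only, mask stays 0.
        return b, 0.0
    # Handcrafted re-normalization: scale by (#weights / #valid pixels)
    # so responses over partially valid patches keep a comparable magnitude.
    out = (w * x * m).sum() * (m.size / valid) + b
    # Forward mask update: any valid pixel in the patch marks it valid.
    return out, 1.0

# Example: 3x3 patch where only the top row is valid.
x = np.ones((3, 3))
m = np.zeros((3, 3)); m[0, :] = 1.0
w = np.full((3, 3), 1.0 / 9.0)
out, new_m = partial_conv_patch(x, m, w, 0.0)   # out == 1.0, mask becomes valid
```

The re-normalization factor `m.size / valid` is exactly the handcrafted part the paper replaces with a learned attention activation.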

Cited by 235 publications (136 citation statements). References 31 publications.
“…Nevertheless, we compared our results with those of the latest methods on 2000 test images from CelebA-HQ and 4000 images from the Places2 test set. In the experiments, the irregular mask dataset of Reference [14] was used, and PSNR and SSIM values were compared, as shown in Tables 5 and 6, where CA [15] denotes Generative Image Inpainting with Contextual Attention (CVPR 2018), PC [14] Image Inpainting for Irregular Holes Using Partial Convolutions (ECCV 2018), EC [17] EdgeConnect (ICCV 2019), GC [16] Free-Form Image Inpainting with Gated Convolution (ICCV 2019), LBAM [21] Learnable Bidirectional Attention Maps (ICCV 2019), and RN [23] Region Normalization for Image Inpainting (AAAI 2020). Among them, the PC results are taken from References [23,32], while the others were produced using the code or pre-trained models provided by their authors.…”
Section: Methods
confidence: 99%
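The PSNR comparison referred to in the excerpt above can be computed with a few lines. This is a generic sketch of the metric, assuming 8-bit images, not the cited evaluation code.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two images, in dB."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Example: a constant offset of 16 gray levels gives MSE = 256.
ref = np.zeros((64, 64), dtype=np.uint8)
test = ref + 16
value = psnr(ref, test)              # 10*log10(255**2/256), about 24.05 dB
```

SSIM, the other metric cited, weighs local luminance, contrast, and structure rather than raw pixel error, which is why the two metrics can rank methods differently.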
“…Yu et al. [16] used a soft mask in place of the binary mask to better represent how far each image region has been restored. Xie et al. [21] reversed the encoder masks and fed them into the decoder network to drive the updating of the damaged regions. In addition, Yang et al. [22] incorporated structure information into the image generation network to produce realistic structures.…”
Section: Image Inpainting Based on Deep Learning
confidence: 99%
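The bidirectional idea mentioned above — a forward mask in the encoder and a reversed mask in the decoder — can be sketched as a gating operation. The activation below is only an illustrative stand-in; LBAM's actual learned activation differs, and all names here are my own.

```python
import numpy as np

def attention_gate(features, mask_feat):
    """Illustrative mask-driven gate: features are re-weighted by an
    activation of the mask features. Stand-in shape, not LBAM's exact one."""
    attn = np.exp(-((mask_feat - 1.0) ** 2))   # peaks where mask is 1 (valid)
    return features * attn

feat = np.random.rand(4, 4)
mask = np.zeros((4, 4)); mask[:, :2] = 1.0     # left half valid, right half hole

# Forward attention in the encoder: emphasize the valid regions.
enc = attention_gate(feat, mask)
# Reverse attention in the decoder: the reversed mask (1 - M) makes the
# decoder concentrate on the hole region that must be filled in.
dec = attention_gate(feat, 1.0 - mask)
```

Making both gates learnable, instead of the hard 0/1 re-normalization of partial convolution, is the core difference the citing papers highlight.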
“…Wang et al. [25] introduce a special multistage attention module that considers structure consistency and detail fineness. To generate fine-grained textures, Xie et al. [26] and Yu et al. [10] introduce attention mechanisms into image inpainting. Xie et al. introduce learnable attention maps to update the mask dynamically.…”
Section: B. Image Inpainting by Deep Generative Models
confidence: 99%
“…In [37], learnable bidirectional attention maps (LBAM) for image inpainting are proposed. The method uses a fully convolutional network (FCN) to perform the inpainting.…”
Section: Related Work
confidence: 99%