2022
DOI: 10.1016/j.cag.2021.09.006

TG-Net: Reconstruct visual wood texture with semantic attention

Cited by 5 publications (3 citation statements)
References 12 publications
“…Yang et al. [20] used the output of a contextual encoder as the network input and improved inpainting quality by minimizing feature differences in the image background, but this approach requires iteratively solving a multiscale optimization problem, which increases the computational cost. Yu et al. [21,22] and Chen et al. [13] took a two-stage approach to image inpainting: a coarse generator first restores the rough outline of the image, and a fine generator combined with attention then reconstructs finer texture. In Deepfill v2 [22], Yu et al. replaced the vanilla convolution of Deepfill v1 [21] with gated convolution, which addresses the problem that vanilla convolution treats all pixels as valid; gated convolution generalizes partial convolution by providing a learnable, dynamic feature-selection mechanism for each channel at each spatial location in all layers, improving inpainting quality. Chen et al. [13] proposed normalizing the foreground and background regions separately to improve texture generation in the missing region.…”
Section: Related Work
confidence: 99%
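To make the gated-convolution idea in the statement above concrete, here is a minimal PyTorch sketch. The module name, channel arguments, and ELU activation are illustrative assumptions, not the actual Deepfill v2 code: the point is only that a second, parallel convolution learns a soft per-channel, per-pixel gate instead of treating every pixel as valid.

```python
import torch
import torch.nn as nn


class GatedConv2d(nn.Module):
    """Sketch of gated convolution (hypothetical, not the Deepfill v2 source).

    Two parallel convolutions see the same input: one produces features,
    the other a per-channel, per-pixel gate in (0, 1). The gate acts as a
    learnable, dynamic feature-selection mechanism, so hole pixels are no
    longer treated as valid the way vanilla convolution treats them.
    """

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3,
                 stride: int = 1, padding: int = 1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.act = nn.ELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output = activation(features) * sigmoid(gate): the sigmoid softly
        # selects, per channel and spatial location, how much signal passes.
        return self.act(self.feature(x)) * torch.sigmoid(self.gate(x))
```

A vanilla convolution is recovered when the gate saturates at 1 everywhere, and partial convolution corresponds to a hard, rule-based 0/1 mask, which is why gated convolution is described as a learnable generalization of partial convolution.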
“…Yu et al. [21,22] and Chen et al. [13] took a two-stage approach to image inpainting: a coarse generator first restores the rough outline of the image, and a fine generator combined with attention then reconstructs finer texture. In Deepfill v2 [22], Yu et al. replaced the vanilla convolution of Deepfill v1 [21] with gated convolution, which addresses the problem that vanilla convolution treats all pixels as valid; gated convolution generalizes partial convolution by providing a learnable, dynamic feature-selection mechanism for each channel at each spatial location in all layers, improving inpainting quality. Chen et al. [13] proposed normalizing the foreground and background regions separately to improve texture generation in the missing region. However, these two methods apply attention only in the fine generator, capture global modeling relationships poorly, and are prone to texture incoherence.…”
Section: Related Work
confidence: 99%
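The coarse-to-fine pipeline these statements describe can be sketched end to end. The following is a hypothetical illustration, not any cited paper's architecture: all module names and channel sizes are assumptions, and a generic nn.MultiheadAttention layer stands in for the fine generator's attention so it can attend over all spatial positions.

```python
import torch
import torch.nn as nn


class CoarseToFineInpainter(nn.Module):
    """Hypothetical two-stage inpainting sketch: a coarse generator restores
    a rough fill, then a fine generator with self-attention refines texture
    by relating every spatial position to every other (global modeling)."""

    def __init__(self, ch: int = 32):
        super().__init__()
        # Stage 1: rough fill from the masked RGB image plus the mask channel.
        self.coarse = nn.Sequential(
            nn.Conv2d(4, ch, 3, 1, 1), nn.ELU(),
            nn.Conv2d(ch, 3, 3, 1, 1),
        )
        # Stage 2: encode, attend globally over spatial positions, decode.
        self.fine_encoder = nn.Conv2d(4, ch, 3, 1, 1)
        self.attn = nn.MultiheadAttention(embed_dim=ch, num_heads=4,
                                          batch_first=True)
        self.fine_decoder = nn.Conv2d(ch, 3, 3, 1, 1)

    def forward(self, image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # mask is 1 inside holes, 0 elsewhere; zero out holes before filling.
        x = torch.cat([image * (1 - mask), mask], dim=1)
        coarse = self.coarse(x)
        merged = image * (1 - mask) + coarse * mask

        f = self.fine_encoder(torch.cat([merged, mask], dim=1))
        b, c, h, w = f.shape
        seq = f.flatten(2).transpose(1, 2)        # (B, H*W, C) token sequence
        # Self-attention lets holes borrow texture from distant background.
        attn_out, _ = self.attn(seq, seq, seq)
        f = attn_out.transpose(1, 2).reshape(b, c, h, w)

        fine = self.fine_decoder(f)
        return image * (1 - mask) + fine * mask


if __name__ == "__main__":
    net = CoarseToFineInpainter()
    img = torch.rand(1, 3, 64, 64)
    hole = (torch.rand(1, 1, 64, 64) > 0.7).float()
    print(net(img, hole).shape)  # torch.Size([1, 3, 64, 64])
```

Because the sketch applies attention only in the fine stage, it also illustrates the limitation the last statement raises: the coarse stage has no global view, so texture coherence rests entirely on the fine generator's attention layer.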