2021
DOI: 10.1109/tip.2020.3043093

Learning Spatial Attention for Face Super-Resolution



Cited by 167 publications (86 citation statements)
References 49 publications
“…• SPARNet [24], which uses a spatial attention mechanism to focus the generation process on key face structure regions.…”
Section: Comparison With Other Published Methods (mentioning)
confidence: 99%
“…Further, a threshold-based fusion and reconstruction module combines the candidate HR images to give the final SR prediction. Chen et al [24] proposed facial attention units (FAUs), which use a spatial attention mechanism to learn and focus on different face structures.…”
Section: Face Hallucination (mentioning)
confidence: 99%
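As a rough illustration of the spatial-attention idea quoted above, here is a minimal PyTorch sketch of an attention-gated residual unit. The layer sizes and the exact attention branch are assumptions for illustration, not the released SPARNet architecture.

```python
# Minimal sketch of a spatial-attention residual unit in the spirit of FAUs.
# Layer widths and the attention branch are illustrative assumptions.
import torch
import torch.nn as nn

class SpatialAttentionUnit(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Feature branch: plain residual convolutions.
        self.features = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Attention branch: predicts a per-pixel map in [0, 1] that can
        # highlight key face structures (eyes, mouth, contours).
        self.attention = nn.Sequential(
            nn.Conv2d(channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.features(x)
        mask = self.attention(x)   # (N, 1, H, W) spatial attention map
        return x + feat * mask     # residual connection, attended features

x = torch.randn(1, 64, 32, 32)
print(SpatialAttentionUnit(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```

Because the mask multiplies the residual branch, the unit can suppress flat skin regions and let gradients concentrate on structured areas, which is the behavior the citing papers attribute to the attention mechanism.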
“…The generator tries to fool the discriminator by producing realistic fake images, while the discriminator learns to distinguish real images from fake ones. GANs have been utilized to solve various image problems, including image synthesis [14], image-to-image translation [15,16], image super-resolution [17,18], facial attribute editing [19,10,20] and expression manipulation [3,4,5].…”
Section: Related Work (mentioning)
confidence: 99%
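The adversarial game described in this statement can be sketched in a few lines. The network bodies and hyperparameters below are placeholders (assumptions), not any cited model; the point is only the alternating discriminator/generator updates.

```python
# Minimal sketch of GAN adversarial training: D separates real from fake,
# G tries to make D label its outputs as real. Architectures are placeholders.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.rand(16, 784)   # stand-in for a batch of real images
z = torch.randn(16, 64)      # latent noise

# Discriminator step: push real -> 1, fake -> 0 (fake detached from G).
fake = G(z).detach()
loss_d = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: make the discriminator label fresh fakes as real.
loss_g = bce(D(G(z)), torch.ones(16, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```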
“…The single image super-resolution performance on benchmark datasets has been boosted continuously by numerous proposed models [1,2,3,4,25], which usually achieve state-of-the-art performance via advanced network architectures or learning strategies. Fundamental and classic network designs in SR models include recursive learning [26], residual learning [27], dense connection [28], multipath learning [29] and attention mechanisms [2,30,31]. Kim et al [26] proposed the Deeply-Recursive Convolutional Network (DRCN) to recursively learn high-level representations with a weight-shared module.…”
Section: Related Work (mentioning)
confidence: 99%
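The recursive, weight-shared design summarized in this statement amounts to applying one convolutional block repeatedly, so effective depth grows without adding parameters. Below is a minimal sketch in that spirit; the depth, width, and recursion count are illustrative assumptions, not the DRCN specification.

```python
# Minimal sketch of recursive learning with a weight-shared module,
# in the spirit of DRCN. Sizes and recursion count are assumptions.
import torch
import torch.nn as nn

class RecursiveNet(nn.Module):
    def __init__(self, channels: int = 64, recursions: int = 5):
        super().__init__()
        self.embed = nn.Conv2d(3, channels, 3, padding=1)
        # A single shared block reused at every recursion step.
        self.shared = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.out = nn.Conv2d(channels, 3, 3, padding=1)
        self.recursions = recursions

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.embed(x)
        for _ in range(self.recursions):  # same weights on every pass
            h = self.shared(h)
        return self.out(h)

lr = torch.randn(1, 3, 32, 32)
print(RecursiveNet()(lr).shape)  # torch.Size([1, 3, 32, 32])
```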