Proceedings of the 28th ACM International Conference on Multimedia 2020
DOI: 10.1145/3394171.3413965
HiFaceGAN: Face Renovation via Collaborative Suppression and Replenishment

Abstract: Existing face restoration research typically relies on either an image degradation prior or explicit guidance labels for training, which often leads to limited generalization over real-world images with heterogeneous degradation and rich background content. In this paper, we investigate a more challenging and practical "dual-blind" version of the problem by lifting the requirements on both types of prior, termed "Face Renovation" (FR). Specifically, we formulate FR as a semantic-guided generation pr…


Cited by 113 publications (81 citation statements)
References 40 publications
“…We evaluate FID and KID by calculating the distribution distances between the BFR results and the real images from FFHQ [21] as well as the original test set, i.e., CelebA-HQ [20] or VGGFace2 [4], respectively. We compare our RMM with HiFaceGAN [51], DFDNet [29], PSFRGAN [6] and PULSE [37] on the CelebA-TB, VGGFace2-TB, LFW-Test and CFW-Test, as shown in Table 1. We conduct the evaluation using the same degraded input image for the compared methods, based on the public metric project [35].…”
Section: Comparisons With State-of-the-art Methods
confidence: 99%
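The FID comparison quoted above reduces to a Fréchet distance between Gaussians fitted to deep features of the two image sets. A minimal sketch of that computation, assuming pre-extracted Inception-style embeddings (the function name is illustrative and not taken from the metric project cited as [35]):

```python
import numpy as np
from scipy import linalg

def fid_from_features(feats_a, feats_b):
    """Frechet distance between Gaussians fitted to two feature sets.

    feats_a, feats_b: (N, D) arrays of deep embeddings (e.g. Inception
    pool features). Returns
    ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^{1/2}).
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)
    if np.iscomplexobj(covmean):  # discard tiny numerical imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

Identical feature sets give a distance near zero; KID, also reported above, replaces the Gaussian assumption with a polynomial-kernel MMD estimate.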
“…[49] adopts an Invertible Rescaling Net (IRN) to model the dual mapping of LQ and HQ images. HiFaceGAN [51] uses a collaborative suppression and replenishment (CSR) framework to achieve face renovation.…”
Section: Related Work
confidence: 99%
“…As the comparison between MPCNet and w/o AdaIN in Figure 7 and Table 2 shows, the AdaIN module can effectively translate the content features to the desired style; likewise, Figure 7 and Table 2 show that the SFT module can make full use of the parsing-map priors to guide the face restoration branch to pay more attention to reconstructing the essential facial parts. To quantitatively compare MPCNet with other state-of-the-art methods: WaveletSRNet, Super-FAN (Bulat and Tzimiropoulos, 2018), DFDNet (Li et al, 2020a), HiFaceGAN (Yang et al, 2020), PSFRGAN (Chen et al, 2021), and GPEN (Yang et al, 2021), we first perform experiments on synthetic images. Following the comparison experiments…”
Section: Ablation Study
confidence: 99%
“…Indradi et al [21] used inception residual networks inside the GAN framework to boost performance and stabilize training. HiFaceGAN [22] uses a suppression module to select informative features, which are then used by a replenishment module for detail recovery. SiGAN [10] uses two identical generators with a pair-wise contrastive loss, based on the fact that different LR face images possess different identities.…”
Section: Face Hallucination
confidence: 99%
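The suppression-then-replenishment data flow described in the quote above can be caricatured in a few lines. The channel-energy gating and residual merge below are stand-ins for HiFaceGAN's learned modules, chosen only to make the two-stage pipeline concrete; they are not the paper's actual design:

```python
import numpy as np

def suppress(features, keep_ratio=0.5):
    """Toy 'suppression': keep only the most energetic feature channels.

    features: (C, H, W) array. Per-channel energy stands in for the
    learned, content-adaptive selection; the top-k rule is a
    simplification for illustration.
    """
    energy = np.abs(features).mean(axis=(1, 2))   # per-channel energy
    k = max(1, int(len(energy) * keep_ratio))
    keep = np.argsort(energy)[-k:]                # indices of channels to keep
    mask = np.zeros_like(energy)
    mask[keep] = 1.0
    return features * mask[:, None, None]

def replenish(selected, base):
    """Toy 'replenishment': merge the selected features back for detail.

    In the actual framework this stage is learned; a residual merge is
    used here purely to show the two-stage flow.
    """
    return base + selected
```

Chaining `replenish(suppress(f), f)` illustrates the idea of first filtering informative content and then restoring detail on top of it.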