2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00728
FSGAN: Subject Agnostic Face Swapping and Reenactment

Figure 1: Face swapping and reenactment. Left: source face swapped onto the target. Right: target video used to control the expressions of the face appearing in the source image. In both cases, our results appear in the middle. For more information please visit our website: https://nirkin.com/fsgan.

Abstract: We present Face Swapping GAN (FSGAN) for face swapping and reenactment. Unlike previous work, FSGAN is subject agnostic and can be applied to pairs of faces without requiring training on those faces. To this e…

Cited by 553 publications (408 citation statements)
References 49 publications

“…However, owing to the limited expressiveness of the 3D face dataset, methods [4, 27] relying on 3D models often fail to reproduce expressions accurately. Recently, FSGAN [26] proposed a two-stage architecture which first conducts expression and posture transfer with a face reenactment network and then uses another face inpainting network to blend the source face into the target image. A common problem for source-oriented methods is that they are sensitive to the input source image.…”
Section: Related Work (mentioning)
Confidence: 99%
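The two-stage design described in this excerpt lends itself to a simple illustration. Below is a minimal, hypothetical PyTorch sketch of such a pipeline, a reenactment generator followed by a blending/inpainting generator; all class names, channel counts, and inputs (landmark heatmaps, segmentation mask) are placeholders for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of a two-stage, subject-agnostic face-swapping pipeline
# in the spirit of FSGAN: a reenactment network transfers the target's pose and
# expression onto the source face, then a blending/inpainting network composites
# the reenacted face into the target frame. Architectures here are placeholders.
import torch
import torch.nn as nn

class ReenactmentNet(nn.Module):
    """Placeholder generator: renders the source identity under the target's
    pose and expression (conditioned, e.g., on landmark heatmaps)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
    def forward(self, source_img, target_pose):
        return self.body(torch.cat([source_img, target_pose], dim=1))

class BlendingNet(nn.Module):
    """Placeholder inpainting/blending generator: composites the reenacted
    face into the target frame inside the face segmentation mask."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
    def forward(self, reenacted, target_img, face_mask):
        return self.body(torch.cat([reenacted, target_img, face_mask], dim=1))

def swap_face(source_img, target_img, target_pose, face_mask, reenactor, blender):
    """Stage 1: reenact the source under the target pose; stage 2: blend it in."""
    reenacted = reenactor(source_img, target_pose)
    return blender(reenacted, target_img, face_mask)

# Toy usage with random tensors standing in for aligned 256x256 face crops.
src = torch.rand(1, 3, 256, 256)
tgt = torch.rand(1, 3, 256, 256)
pose = torch.rand(1, 3, 256, 256)   # e.g. rendered landmark heatmaps
mask = torch.rand(1, 1, 256, 256)   # face segmentation mask
out = swap_face(src, tgt, pose, mask, ReenactmentNet(), BlendingNet())
print(out.shape)  # torch.Size([1, 3, 256, 256])
```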
“…Therefore, future creations will be improved thanks to past experiences. This feature makes this misinformation procedure more dangerous, especially due to the emergence of mobile apps and computer programmes that allow users without computer programming training to produce deepfakes (Nirkin, Keller, & Hassner, 2019; Schwartz, 2018). Farid et al (2019, pp.…”
Section: Deepfake: A Novel Form Of Fake News (mentioning)
Confidence: 99%
“…To keep structural consistency of faces in face reenactment, Wu et al. [168] proposed a method that first maps the source face onto a boundary latent space, then transforms the source boundary to adapt to the target boundary, and finally decodes the transformed boundary to generate the reenacted face. The Face Swapping GAN (FSGAN) was proposed to swap and reenact faces in a subject-agnostic manner [169], using an RNN-based approach that adjusts for pose and expression variations, assisted by a face completion network and a face blending network to generate realistic face swapping results.…”
Section: Face Reenactment (mentioning)
Confidence: 99%
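The boundary-latent-space reenactment attributed to Wu et al. [168] in the excerpt above can likewise be sketched at a high level. The following hypothetical PyTorch snippet shows the three steps it describes, encoding a face to a boundary map, adapting that boundary to the target, and decoding it back to a reenacted face; the module definitions are illustrative stand-ins, not the original architecture.

```python
# Hedged sketch of boundary-latent-space reenactment: image -> boundary map ->
# boundary adapted to the target subject -> decoded reenacted face.
# All shapes and layer choices are illustrative only.
import torch
import torch.nn as nn

class BoundaryEncoder(nn.Module):
    """Maps a face image to a boundary heatmap (structural latent space)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, img):
        return self.net(img)

class BoundaryTransformer(nn.Module):
    """Adapts the source boundary toward the target subject's boundary space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, boundary):
        return self.net(boundary)

class FaceDecoder(nn.Module):
    """Decodes a transformed boundary back into a reenacted face image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
    def forward(self, boundary):
        return self.net(boundary)

def reenact(source_img, encoder, transformer, decoder):
    boundary = encoder(source_img)    # extract structure (pose/expression)
    adapted = transformer(boundary)   # fit the target's boundary distribution
    return decoder(adapted)           # render the reenacted face

out = reenact(torch.rand(1, 3, 256, 256),
              BoundaryEncoder(), BoundaryTransformer(), FaceDecoder())
print(out.shape)  # torch.Size([1, 3, 256, 256])
```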