2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01993
Synthetic Generation of Face Videos with Plethysmograph Physiology

Cited by 28 publications (15 citation statements)
References 28 publications
“…In this study, video files from UCLA dataset [13] were used to extract physiological information from individuals. Each video is processed using the proposed method described in section 2.1, which returns the rPPG signal.…”
Section: Results
confidence: 99%
“…For instance, Dasari et al [12] proposed a dataset that only contains dark skin tones, but the actual videos are not shared, only the color space values of the skin region of interest. Despite these challenges, Wang et al [13] recently proposed the largest known rPPG dataset, which includes a variety of participants with different skin tones. The dataset comprises 98 subjects and 489 videos of various skin tones, ages, genders, ethnicities, and races.…”
Section: UCLA Dataset
confidence: 99%
“…The input PPG signal for the avatars is generated through the convolution of a Gaussian window with the beat sequence, derived from a heart rate frequency range. In contrast to the intricate pipeline of SCAMPS, Wang et al (2022) present a more user-friendly approach for generating synthetic videos. This method employs a statistical 3D head model, extracting facial features from publicly available in-the-wild face datasets [BUPT-Balancedface (Wang et al, 2019)], while the PPG waveforms are recorded from real human subjects.…”
Section: Related Work
confidence: 99%
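The PPG-synthesis step quoted above (a Gaussian window convolved with a beat impulse train at a chosen heart rate) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the cited authors' code; the sampling rate, pulse width `sigma_s`, and function name `synthesize_ppg` are assumptions chosen for the example.

```python
import numpy as np

def synthesize_ppg(heart_rate_bpm, duration_s=10.0, fs=30.0, sigma_s=0.1):
    """Synthesize a PPG-like waveform: place one unit impulse per
    heartbeat, then convolve with a Gaussian pulse shape."""
    n = int(duration_s * fs)
    t = np.arange(n) / fs

    # Beat impulse train at the given heart-rate frequency.
    beat_period = 60.0 / heart_rate_bpm
    beats = np.zeros(n)
    beat_idx = (np.arange(0.0, duration_s, beat_period) * fs).astype(int)
    beats[beat_idx] = 1.0

    # Gaussian window truncated at +/- 3 sigma.
    half = int(3 * sigma_s * fs)
    tw = np.arange(-half, half + 1) / fs
    window = np.exp(-0.5 * (tw / sigma_s) ** 2)

    # Convolution yields the smooth periodic waveform.
    ppg = np.convolve(beats, window, mode="same")
    return t, ppg

t, ppg = synthesize_ppg(heart_rate_bpm=72)
print(ppg.shape)  # (300,)
```

Varying `heart_rate_bpm` per segment (or drawing it from a frequency range, as the quote describes) produces the diverse driving signals used for the avatars.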
“…Synthetic data present the possibility to generate data without expensive collection costs, with the added advantage of controlling the properties of the dataset, which can help create less biased models [110]. Later, Wang et al [111] introduced a scalable physics-based learning model to generate synthetic rPPG videos with diverse attributes, such as skin color and lighting conditions, to improve performance.…”
Section: Future Prospects
confidence: 99%