2021 IEEE International Joint Conference on Biometrics (IJCB)
DOI: 10.1109/ijcb52358.2021.9484395

Structure Destruction and Content Combination for Face Anti-Spoofing

Cited by 43 publications (6 citation statements)
References 24 publications

Citation statements (ordered by relevance):
“…In the tables, we present our method under the name of DeepPixBis + DPS and CDCN + DPS, where DPS stands for deep patch-wise supervision. For intra-data set experiments, we also present the results of different state-of-the-art methods such as DC-CDN [31], CDCN-PS [25], and DCN [34]. Table 1 presents the obtained results on the Replay Mobile data set, 'grandtest' protocol.…”
Section: Experiments and Resultsmentioning
confidence: 99%
“…In Ref. [34], the authors propose a Destruction and Combination Network (DCN), which utilises their structure destruction, content combination, and local relation modelling modules. The structure destruction module is similar to jigsaw puzzle pretext tasks proposed in Ref.…”
Section: Related Workmentioning
confidence: 99%
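The structure destruction described in the excerpt above is a jigsaw-style operation: the face image is cut into a grid of patches whose positions are shuffled before being fed to the network. Below is a minimal PyTorch sketch of such a patch-shuffling step; the function name `structure_destruction`, the 3x3 grid, and the per-image random permutation are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def structure_destruction(images: torch.Tensor, grid: int = 3) -> torch.Tensor:
    """Jigsaw-style patch shuffling (illustrative sketch only).

    images: (B, C, H, W) batch, with H and W divisible by `grid`.
    Returns the batch with each image's grid*grid patches randomly permuted.
    """
    b, c, h, w = images.shape
    ph, pw = h // grid, w // grid
    # Cut each image into grid*grid non-overlapping patches.
    patches = images.unfold(2, ph, ph).unfold(3, pw, pw)          # (B, C, g, g, ph, pw)
    patches = patches.contiguous().view(b, c, grid * grid, ph, pw)
    # One random permutation of patch positions per image in the batch.
    perm = torch.stack([torch.randperm(grid * grid) for _ in range(b)])
    idx = perm.view(b, 1, grid * grid, 1, 1).expand_as(patches)
    shuffled = torch.gather(patches, 2, idx)
    # Reassemble the permuted patches into full-resolution images.
    shuffled = shuffled.view(b, c, grid, grid, ph, pw)
    shuffled = shuffled.permute(0, 1, 2, 4, 3, 5).contiguous().view(b, c, h, w)
    return shuffled
```

For example, `structure_destruction(batch, grid=3)` returns the same batch with each image's nine patches rearranged, which destroys global facial structure while preserving local spoof cues.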
“…For instance, Li et al (2018a) used the MMD distance to make features unrelated to domains. Jia et al (2020) and Yang et al (2021) introduced triplet loss (Li et al 2019a,b), and Zhang et al (2021b) even constructed a similarity matrix to constrain the distance between features. Also, many meta-learning-based methods were exploited to find a generalized space among multiple domains (Shao et al 2020; Qin et al 2020; Kim and Lee 2021; Chen et al 2021b; Wang et al 2021; Qin et al 2021).…”
Section: Related Workmentioning
confidence: 99%
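The MMD-based alignment mentioned in the excerpt above penalizes the distance between feature distributions drawn from different domains. Below is a minimal sketch of a biased Gaussian-kernel MMD^2 estimate between two feature batches; the bandwidth `sigma` and the function name are assumptions for illustration, not the cited papers' exact formulation.

```python
import torch

def mmd_loss(feats_a: torch.Tensor, feats_b: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased MMD^2 estimate between two feature batches (B, D) with an RBF kernel."""
    def rbf_kernel(x, y):
        # Pairwise squared Euclidean distances, then a Gaussian kernel.
        dists = torch.cdist(x, y) ** 2
        return torch.exp(-dists / (2 * sigma ** 2))
    k_aa = rbf_kernel(feats_a, feats_a).mean()
    k_bb = rbf_kernel(feats_b, feats_b).mean()
    k_ab = rbf_kernel(feats_a, feats_b).mean()
    return k_aa + k_bb - 2 * k_ab
```

Adding such a term to the training loss for every pair of source domains pushes their feature distributions together, which is the sense in which the learned features become unrelated to domains.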
“…We evaluate various representative algorithms on Protocol 1 and Protocol 2, including classification supervision (i.e., ResNet-50 [25], PatchNet [64], MaxVit [59]), auxiliary pixel-wise supervision (i.e., CDCN++ [80], CDCN++binarymask [75], DCN [81], DC-CDN [78]) and generative pixel-wise supervision (i.e., LGSC [19]). For a fair comparison, we do not use pre-trained models, and other parameters are reproduced according to the original works.…”
Section: Baselinesmentioning
confidence: 99%
“…Classification supervisions are easy to construct, enabling deep FAS models to converge rapidly. In contrast, pixel-wise supervision with auxiliary tasks can extract more fine-grained cues, with additional information such as pseudo depth maps [65,78,80,82], binary mask maps [43,66,74], and reflection maps [30,81,84] helping to delineate local live/spoof features. Generative pixel-wise supervisions [19,42,44,54,56], which do not rely on expert-designed guidance and offer more flexible labels, visualize spoof cues in spoof samples, thereby enhancing the interpretability of FAS tasks.…”
Section: Introductionmentioning
confidence: 99%
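As a concrete example of the auxiliary pixel-wise supervision mentioned in the excerpt above, a binary mask map assigns every pixel of a live face the label 1 and every pixel of a spoof face the label 0, and the network's per-pixel prediction is trained with binary cross-entropy. The sketch below assumes a single-channel logit map output; the function name and tensor shapes are illustrative, not taken from any of the cited works.

```python
import torch
import torch.nn.functional as F

def pixel_wise_binary_loss(pred_map: torch.Tensor, live: torch.Tensor) -> torch.Tensor:
    """Per-pixel binary mask supervision (illustrative sketch only).

    pred_map: (B, 1, H, W) logits from the pixel-wise branch.
    live:     (B,) binary labels, 1 for live faces and 0 for spoof faces.
    """
    # Broadcast the frame-level label to a full-resolution target mask.
    target = live.float().view(-1, 1, 1, 1).expand_as(pred_map)
    return F.binary_cross_entropy_with_logits(pred_map, target)
```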