2017
DOI: 10.1117/1.jei.26.4.043007
Face antispoofing based on frame difference and multilevel representation

Abstract: Due to advances in technology, today's biometric systems have become vulnerable to spoof attacks made with fake faces. These attacks occur when an intruder attempts to fool an established face-based recognition system by presenting a fake face (e.g., a print photo or a replay attack) to the camera instead of the intruder's genuine face. For this purpose, face antispoofing has become a hot topic in face analysis…
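The title refers to a frame-difference cue computed over face video. As a rough illustration only (not the authors' pipeline), the Python sketch below measures the mean inter-frame change of a clip, the kind of raw motion signal on which multilevel texture features such as LPQ histograms could then be built; the function name, the OpenCV usage, and the max_frames cap are assumptions.

```python
# Minimal sketch (not the paper's implementation): frame differencing over a
# face video as a coarse motion/liveness cue, assuming OpenCV and NumPy.
import cv2
import numpy as np

def frame_difference_energy(video_path, max_frames=100):
    """Return the mean absolute inter-frame difference of a video clip."""
    cap = cv2.VideoCapture(video_path)
    prev_gray = None
    diffs = []
    while len(diffs) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            # Absolute difference between consecutive frames highlights motion.
            diffs.append(np.mean(cv2.absdiff(gray, prev_gray)))
        prev_gray = gray
    cap.release()
    return float(np.mean(diffs)) if diffs else 0.0
```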

Cited by 19 publications (7 citation statements)
References 41 publications
“…Despite the fact that our EER and HTER are similar to those of the previous approaches, the following are the types of attacks in the replay databases on which we compute the performance of our method. Depending on the device used to hold the replayed attack medium (paper, mobile phone, or tablet), the three attack subsets (print, mobile, and highdef) were recorded in two different modes: i) fixed-support and ii) hand-based (see Table 3). We also put our method to the test using the MSU-MFSD database.…”

Error rates (EER % / HTER %) reported in the same passage; the first method's name is truncated in the source:

Method                               EER (%)  HTER (%)
(name truncated in source)           11.78    11.79
LBP-TOP [11]                         07.90    07.60
IDA [13]                             08.58    07.41
Motion+LBP [49]                      04.50    05.11
FD-ML-LPQ-Fisher [6]                 05.62    04.80
DMD [12]                             05.30    03.75
Colour-LBP [15]                      00.40    02.90
Spectral cubes [41]                  -        02.75
CNN [20]                             06.10    02.10
USDAN-Norm [22]                      -        00.30
Bottleneck Feature Fusion + NN [23]  00.83    00.00
Identity-DS [21]                     00.20    00.00
S-CNN+PL+TC [46]                     00.36    -
BS-CNN+MV (our)                      00.58    00.62

Section: Performance Comparison On Intra-database
confidence: 99%
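For context on the two columns above: EER and HTER are the standard antispoofing error metrics, derived from the false acceptance rate (FAR) and false rejection rate (FRR) of a score-based classifier. The sketch below shows the usual definitions; the threshold search and the function names are assumptions, not code from any of the cited papers.

```python
# Illustrative definitions of EER and HTER from genuine/attack scores;
# higher scores are assumed to mean "more likely genuine".
import numpy as np

def far_frr(genuine, attack, thr):
    """False acceptance / false rejection rates at a decision threshold."""
    far = np.mean(np.asarray(attack) >= thr)   # attacks accepted as genuine
    frr = np.mean(np.asarray(genuine) < thr)   # genuine faces rejected
    return far, frr

def eer(genuine, attack):
    """Equal error rate: the operating point where FAR and FRR (nearly) coincide."""
    thresholds = np.unique(np.concatenate([genuine, attack]))
    gaps = [abs(np.subtract(*far_frr(genuine, attack, t))) for t in thresholds]
    t_star = thresholds[int(np.argmin(gaps))]
    return float(np.mean(far_frr(genuine, attack, t_star)))

def hter(genuine, attack, thr):
    """Half total error rate at a fixed threshold, usually tuned on a dev set."""
    far, frr = far_frr(genuine, attack, thr)
    return (far + frr) / 2.0
```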
“…Inspired by the work on Frame Difference and Multilevel Representation (FDML) [6], we propose an effective biometric system based on the detection of face spoofing. To do this, we suggest using a background subtraction method in the preprocessing step to adjust for the face's motion.…”
Section: Introduction
confidence: 99%
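As a hedged illustration of the background-subtraction preprocessing mentioned in that statement, the sketch below uses OpenCV's built-in MOG2 subtractor to obtain a per-frame foreground mask; the specific subtractor, its parameters, and the median-filter cleanup are assumptions rather than the cited paper's exact procedure.

```python
# Sketch: background subtraction as a preprocessing step, assuming OpenCV's
# MOG2 model; not the cited paper's exact pipeline.
import cv2

def foreground_masks(video_path):
    """Yield (frame, foreground_mask) pairs for each frame of a video."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)   # 0 = background, 255 = moving pixels
        mask = cv2.medianBlur(mask, 5)   # light cleanup of speckle noise
        yield frame, mask
    cap.release()
```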
“…Benlamoudi et al. [2] combine three popular texture descriptors: BSIF, LBP, and LPQ. Thippeswamy, G., et al. [38] created an ensemble of local-appearance-based models built on the LBP, GDP, GLTP, LDiP, LGBPHS, and LPQ descriptors to detect 2D presentation attacks.…”
Section: Related Work
confidence: 99%
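The descriptors listed in that statement (BSIF, LBP, LPQ, GDP, GLTP, LDiP, LGBPHS) are all local texture histograms. As one concrete example of the family, the sketch below computes a uniform-LBP histogram for a grayscale face crop with scikit-image; the parameters (P=8, R=1, method "uniform") are common defaults and not the settings used in the cited ensembles.

```python
# Example of one texture descriptor from the family above (LBP), using
# scikit-image; parameter choices are illustrative defaults.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_face, points=8, radius=1):
    """Return a normalised uniform-LBP histogram for a grayscale face crop."""
    codes = local_binary_pattern(gray_face, points, radius, method="uniform")
    n_bins = points + 2   # uniform patterns plus one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist
```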
“…Due to the domain shift (different imaging environments) between databases, the performance of all the anti-spoofing methods drops. Compared with the state-of-the-art methods, our method (MobileNet + Attention) achieves the 2nd-best performance (30.0% and 33.4%), slightly worse than the best one [63] (27.6% and 28.4%). However, [63] uses more auxiliary information (3D face shape, rPPG signals) than our method.…”

Cross-database error rates interleaved with the passage (two HTER columns, one per cross-database protocol; the first method's name and the last row's values are truncated in the source):

Method                   HTER (%)  HTER (%)
(name truncated) [60]    50.2      47.9
LBP [56]                 55.9      57.6
LBP-TOP [61]             49.7      60.6
Motion-Mag [64]          50.1      47.0
Spectral cubes [22]      34.4      45.5
CNN [14]                 48.5      39.6
Color-LBP [10]           47.0      39.6
Colour Texture [8]       30.3      37.7
Depth + rPPG [63]        27.6      28.4
Deep-Learning [13]       48.2      45.4
KSA [65]                 33.1      32.1
Frame difference [66]    50…      (truncated in source)

Section: G. Cross-database Comparisons
confidence: 99%