2020
DOI: 10.1007/s11042-020-09385-5
KeyFrame extraction based on face quality measurement and convolutional neural network for efficient face recognition in videos

Cited by 10 publications (6 citation statements)
References 59 publications
“…To avoid processing a whole video, keyframe extraction methods for face recognition in videos have been developed. Abed et al [5] propose a method based on face quality and deep learning. The first step is face detection using the MTCNN detector, which detects five landmarks (the two eyes, the two corners of the mouth, and the nose), bounds the face with a bounding box, and provides a confidence score.…”
Section: Related Work
confidence: 99%
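The detector-confidence filtering step described in the statement above can be sketched in plain Python. This is an illustrative sketch, not the cited method: the per-face dictionaries mock the output format of common MTCNN implementations (`box`, `confidence`, and five `keypoints`), and the 0.95 threshold is an assumption.

```python
# Sketch: selecting candidate keyframes by face-detector confidence.
# The detection dicts below are mocked; their fields mirror the output of
# common MTCNN implementations (box, confidence, five facial keypoints).

def best_face_confidence(detections):
    """Return the highest face-detection confidence in one frame (0.0 if none)."""
    return max((d["confidence"] for d in detections), default=0.0)

def select_keyframes(frames, threshold=0.95):
    """Keep indices of frames whose most confident face meets the threshold."""
    return [i for i, dets in enumerate(frames)
            if best_face_confidence(dets) >= threshold]

# Mocked detector output for three frames: no face, low confidence, high confidence.
frames = [
    [],
    [{"box": [10, 10, 40, 50], "confidence": 0.80,
      "keypoints": {"left_eye": (20, 25), "right_eye": (35, 25), "nose": (27, 33),
                    "mouth_left": (22, 42), "mouth_right": (33, 42)}}],
    [{"box": [12, 8, 42, 52], "confidence": 0.99,
      "keypoints": {"left_eye": (22, 24), "right_eye": (37, 24), "nose": (29, 32),
                    "mouth_left": (24, 41), "mouth_right": (35, 41)}}],
]
print(select_keyframes(frames))  # [2]
```

Only the third frame passes the (assumed) threshold, so it alone would be kept as a keyframe candidate.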
“…The issue with some existing methods of face image quality computation is their dependence on subjective or indirect measures of quality, which may not necessarily align with the needs of face recognition systems. In contrast, these newer methods, as exemplified by the works of Abed et al [ 5 ] and Bahroun et al [ 7 ], provide a more direct measure of face image quality, which is closely tied to the performance of the face recognition model itself.…”
Section: Related Work
confidence: 99%
“…This article employs the correlation coefficient similarity criterion to compare the estimated feature vectors with the feature vectors of the images in the dataset. The correlation coefficient is calculated using the following formula:

$$r = \frac{\sum_{i=1}^{n}\left(X_i - \bar{X}\right)\left(Y_i - \bar{Y}\right)}{\sqrt{\sum_{i=1}^{n}\left(X_i - \bar{X}\right)^{2}\,\sum_{i=1}^{n}\left(Y_i - \bar{Y}\right)^{2}}} \qquad (9)$$

where X is the estimated feature vector and Y is the feature vector of the image in the dataset. Also, $\bar{X}$ is the mean of X, and $\bar{Y}$ is the mean of Y.…”
Section: Face Recognition
confidence: 99%
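The correlation criterion in the statement above is the standard Pearson correlation coefficient; a minimal pure-Python sketch (function name and example vectors are illustrative) is:

```python
import math

def correlation(x, y):
    """Pearson correlation coefficient between two equal-length feature vectors."""
    n = len(x)
    mx = sum(x) / n              # mean of X
    my = sum(y) / n              # mean of Y
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two perfectly linearly related vectors give a coefficient of 1.0.
print(correlation([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # 1.0
```

In the cited recognition pipeline, the dataset image whose feature vector maximizes this coefficient against the estimated vector would be chosen as the match.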
“…In other words, as the angle of the face to the camera increases, the accuracy of face recognition methods decreases. Recently, feature fusion techniques [9,10] have improved the performance of facial recognition systems to some extent. In a facial recognition system, the fusion of information can be done at the decision level or at the feature level [11].…”
Section: Introduction
confidence: 99%
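The two fusion levels mentioned in the statement above can be contrasted with a short sketch. This is a generic illustration, not the method of the cited works: the function names, the simple score-averaging rule, and the example values are all assumptions.

```python
# Illustrative sketch of the two fusion levels for recognition systems:
# feature-level fusion combines feature vectors BEFORE matching, while
# decision-level fusion combines per-modality match scores AFTER matching.

def feature_level_fusion(feature_vectors):
    """Concatenate feature vectors from several extractors into one vector."""
    fused = []
    for vec in feature_vectors:
        fused.extend(vec)
    return fused

def decision_level_fusion(scores, weights):
    """Weighted sum of per-modality match scores (weights assumed to sum to 1)."""
    return sum(s * w for s, w in zip(scores, weights))

print(feature_level_fusion([[0.1, 0.2], [0.3]]))      # [0.1, 0.2, 0.3]
print(decision_level_fusion([0.9, 0.6], [0.5, 0.5]))  # 0.75
```

Feature-level fusion preserves more information for the matcher, at the cost of a higher-dimensional representation; decision-level fusion is simpler to integrate when the individual matchers already exist.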