2020
DOI: 10.1109/jbhi.2020.2964520

Facial Weakness Analysis and Quantification of Static Images

Cited by 16 publications (13 citation statements)
References 28 publications

“…Real-World and Public Datasets Efforts have also been made to construct datasets from real-world scenes and make them available for sharing. Recently, Zhuang et al [65] built an “in-the-wild” static image dataset of facial weakness from YouTube, Google Images, and other public repositories. They combined landmark and intensity features to detect pathological facial asymmetry, which yielded considerable accuracy.…”
Section: The Facial Recognition System: Applications and Advantages
confidence: 99%
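The excerpt above does not spell out how landmark and intensity features are combined. As a rough illustration only, the following is a minimal sketch of a purely landmark-based asymmetry score; the midline estimate, the left/right index pairing, and any fusion with intensity features are assumptions, not the cited method.

```python
# Hypothetical sketch of a landmark-based asymmetry score: reflect the left-side
# landmarks across an estimated facial midline and measure how far the mirrored
# points fall from their right-side counterparts. Larger scores suggest asymmetry.
import numpy as np

def asymmetry_score(landmarks, left_idx, right_idx):
    """
    landmarks : (N, 2) array of (x, y) facial landmark coordinates
    left_idx, right_idx : index lists pairing symmetric left/right landmarks
    """
    midline_x = np.mean(landmarks[:, 0])                 # crude vertical midline estimate
    left = landmarks[left_idx].astype(float)
    right = landmarks[right_idx].astype(float)
    mirrored_left = left.copy()
    mirrored_left[:, 0] = 2.0 * midline_x - left[:, 0]   # reflect x about the midline
    return float(np.linalg.norm(mirrored_left - right, axis=1).mean())

# Example with a 68-point landmark layout (indices here are illustrative only):
# score = asymmetry_score(landmarks, left_idx=[36, 48], right_idx=[45, 54])
```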
“…In a multicenter cross-sectional study of 5796 patients, this method achieved a sensitivity of 0.80, a specificity of 0.54, and an AUC of 0.730. Zhuang et al [65] have also built a model to identify the asymmetric faces of stroke patients. These studies demonstrate the potential of automated facial video- or image-based assessment systems to detect acute and severe diseases.…”
Section: The Facial Recognition System: Applications and Advantages
confidence: 99%
“…After preprocessing the video, HoG features are extracted for each individual frame. We prefer HoG features over landmark features, which are commonly used for vision-based facial weakness analysis, because landmark-based methods can suffer from inaccuracies in facial landmark localization (17,18), whereas HoG features can handle local misalignment and capture the detailed gradient features exhibited by facial weakness (15). Since HoG features are high-dimensional, to improve computational efficiency and avoid overfitting, principal component coefficients are computed from the training dataset to reduce the HoG features to the components that cover 95% of the variance.…”
Section: Computer Vision Algorithm
confidence: 99%
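A minimal sketch of the per-frame HoG extraction and 95%-variance PCA reduction described above, assuming scikit-image's hog and scikit-learn's PCA as stand-ins for the cited implementation; train_frames and test_frames are hypothetical lists of preprocessed grayscale frames.

```python
# Sketch: per-frame HoG descriptors reduced with PCA fit on the training set only.
# scikit-image / scikit-learn are assumptions, not necessarily the cited tooling.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA

def extract_hog_features(frames):
    """Compute one HoG descriptor per preprocessed grayscale frame."""
    return np.array([
        hog(frame,
            orientations=9,           # nine orientation bins per cell
            pixels_per_cell=(8, 8),   # 8x8-pixel cells
            cells_per_block=(2, 2))   # four cells per block
        for frame in frames
    ])

X_train = extract_hog_features(train_frames)        # train_frames: hypothetical input
pca = PCA(n_components=0.95).fit(X_train)           # keep components covering 95% of the variance
X_train_reduced = pca.transform(X_train)
X_test_reduced = pca.transform(extract_hog_features(test_frames))
```

Fitting the PCA on the training descriptors alone and reusing the same projection on test data avoids leaking test statistics into the dimensionality reduction.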
“…Finally, a voting classifier aggregates the individual classification results and outputs a discrete classification: normal, left facial weakness, or right facial weakness (Figure 1). In addition, an ensemble-of-regression-trees-based facial landmark extractor (20) is used in our study because of its accurate and robust performance (18). The HoG features are configured as follows: the number of orientation bins per cell is nine, each cell consists of eight by eight pixels, and each block contains four cells.…”
Section: Computer Vision Algorithm
confidence: 99%
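The aggregation step above is a voting classifier over per-frame decisions. The simple majority rule below is an assumed realization for illustration (the cited system may weight votes differently); only the three labels come from the text.

```python
# Sketch: majority vote over per-frame predictions producing one video-level label.
from collections import Counter

LABELS = ("normal", "left facial weakness", "right facial weakness")

def vote(frame_predictions):
    """Aggregate frame-level labels into a single discrete decision."""
    counts = Counter(frame_predictions)
    # Ties fall back to label order; a real system might weight by classifier confidence instead.
    return max(LABELS, key=lambda label: counts.get(label, 0))

# Example: seven frames classified individually, then aggregated.
print(vote(["normal", "left facial weakness", "left facial weakness",
            "normal", "left facial weakness", "left facial weakness", "normal"]))
# -> left facial weakness
```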
“…This watermarking-based method was designed to certify that biometric data originated from a genuine sensor. Zhuang et al [13] discuss the experimentation and evaluation of several existing feature extraction methods used to measure facial weakness. Since no open-source annotated facial weakness image dataset was available, the experiments first required creating a facial weakness dataset from images and videos in public repositories such as Google Images and YouTube.…”
Section: Sharing Datasets Attributes and Provenance
confidence: 99%