2014
DOI: 10.1186/1687-5281-2014-14
Classification of extreme facial events in sign language videos

Abstract: We propose a new approach for Extreme States Classification (ESC) on feature spaces of facial cues in sign language (SL) videos. The method is built upon Active Appearance Model (AAM) face tracking and feature extraction of global and local AAMs. ESC is applied on various facial cues (for instance, pose rotations, head movements, and eye blinking), leading to the detection of extreme states such as left/right, up/down, and open/closed. Given the importance of such facial events in SL analysis, we apply ESC to…


Cited by 6 publications (5 citation statements)
References 35 publications
“…Instances where the mouth shapes are derived from the spoken language forms of the respective signs are known as speech-like mouthings [52]. To this end, research went further than classifying only hand shapes, to include mouth shapes [53,54], whole-face [55,56] and whole-body configurations [57,58], all in the context of sign language recognition and translation. The broader field of understanding sign language includes analyses of eyebrow position during signing to provide context and signer intent [59], further illustrating the importance of the holistic approach.…”
Section: Modelling
confidence: 99%
“…These expressions can be visually modeled by deformable models that encode both geometric shape and brightness texture information. Deformable masks provided by active appearance models (AAMs) [8] can successfully help with detecting and tracking several types of informative events of a sign sequence, e.g., eye blinking, as done in [9]. AAMs [10] have also significantly boosted the performance of handshape recognition in sign language videos.…”
Section: Related Work
confidence: 99%