IEEE 10th INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING PROCEEDINGS 2010
DOI: 10.1109/icosp.2010.5655761
Correlation analysis of facial features and sign gestures

Abstract: In this paper we focus on the potential correlation of the manual and the non-manual component of sign language. This information is useful for sign language analysis, recognition and synthesis. We are mainly concerned with the application to sign synthesis. First we extracted features that represent the manual and the non-manual component. We present a simple but robust method for hand tracking to obtain a 2D trajectory representing a portion of the manual component. The head is tracked via Active A…
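As a hedged illustration (not the paper's actual method, whose tracking pipeline is only partially described above), the relationship between a hand trajectory and head movement within a sign can be quantified with a per-sign Pearson correlation on the vertical components. The trajectories and helper below are invented for the sketch:

```python
# Sketch: Pearson correlation between the vertical (y) components of a
# hand trajectory and a head trajectory for one sign. The per-frame
# positions below are toy values, not data from the paper.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-frame vertical positions over one sign:
hand_y = [0.0, 0.2, 0.5, 0.9, 1.0, 0.8, 0.4, 0.1]
head_y = [0.0, 0.1, 0.3, 0.6, 0.7, 0.5, 0.2, 0.0]

r = pearson(hand_y, head_y)
print(f"hand/head vertical correlation: r = {r:.2f}")
```

A high positive r for a sign would be consistent with the finding, reported in the citing work below, that hand and head gestures correlate mainly in signs with vertical movement.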

Cited by 7 publications (3 citation statements)
References 5 publications (7 reference statements)
“…The correlation between manual and non-manual gestures of sign language has been studied by Krnoul et al. [30]. This study was conducted on Czech sign language, and its findings showed that hand and head gestures are correlated mainly in signs with vertical movement of the head and hands.…”
Section: Sign Language Recognition Systems
confidence: 75%
“…In continuous sign language databases, more than one sign word is performed in succession. The vertical movement of the head and hands is correlated in manual and non-manual sign language recognition [24]. Sign videos with similar manual but different non-manual components are investigated in the study in [25].…”
Section: Literature Review
confidence: 99%
“…SiGML relies on HamNoSys as the underlying representation for manuals (Hanke, 2004), but introduces a set of facial non-manual specifications, including head orientation, eye gaze, brows, eyelids, nose, and mouth; its implementation uses the maskable morphing approach for synthesis. However, there is no consensus on how best to specify facial non-manual signals, particularly for the mouth, and other research groups have either developed their own custom specification (Lombardo, Battaglino, Damiano and Nunnari, 2011) or are using an earlier annotation system such as SignWriting (Krnoul, 2010). Further, none of these efforts have yet specified an approach to generating co-occurring facial non-manual signals.…”
Section: Related Work
confidence: 99%