Proceedings of the British Machine Vision Conference 2011
DOI: 10.5244/c.25.124

A Framework for the Recognition of Nonmanual Markers in Segmented Sequences of American Sign Language

Abstract: Although critical grammatical information is expressed through facial expressions and head gestures, most research in sign language recognition has focused primarily on the manual component of signing. We propose a novel framework for robust tracking and analysis of non-manual behaviours, with an application to sign language recognition. The novelty of our method is threefold. First, we propose a dynamic feature representation. Instead of using only the features available in the…
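The abstract's "dynamic feature representation" is not fully spelled out in this excerpt. Below is a minimal sketch of one common interpretation, not the authors' implementation: per-frame facial-landmark and head-pose measurements augmented with their first-order temporal differences before classification. The function name, array shapes, and feature choices are illustrative assumptions.

```python
import numpy as np

def dynamic_features(landmarks, head_pose):
    """Build a per-frame feature matrix for non-manual marker analysis.

    landmarks: (T, L, 2) tracked 2D facial points over T frames.
    head_pose: (T, 3) yaw, pitch, roll angles per frame.
    Returns (T, D): static features concatenated with first-order deltas.
    """
    T = landmarks.shape[0]
    static = np.concatenate([landmarks.reshape(T, -1), head_pose], axis=1)
    # First-order temporal differences; the first frame gets zero deltas.
    deltas = np.vstack([np.zeros((1, static.shape[1])), np.diff(static, axis=0)])
    return np.concatenate([static, deltas], axis=1)

# Synthetic example: 30 frames, 68 facial landmarks, 3 head-pose angles.
feats = dynamic_features(np.random.rand(30, 68, 2), np.random.rand(30, 3))
print(feats.shape)  # (30, 278): 139 static dimensions plus 139 deltas
```

Such a representation captures motion (e.g. eyebrow raises, head tilts) as well as static configuration, which is what makes segmented sequences separable by a downstream classifier.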

Cited by 4 publications (2 citation statements). References 31 publications.
“…Conditional sentences contain subordinate clauses that express the conditions under which the proposition in the superordinate clause holds. In sign languages, such subordinate clauses are marked by raised eyebrows, widened eyes, and a head moved forward (or back) and tilted to the side, followed by a pause after which the eyebrows and head return to a neutral position [39,40,50].…”
Section: Role of Non-Manual Signals in Modifying Some Sign Words
confidence: 99%
“…Instances where the mouth shapes are derived from the spoken-language forms of the respective signs are known as speech-like mouthings [52]. To this end, research has gone beyond classifying hand shapes alone to include mouth shapes [53,54], whole-face [55,56] and whole-body configurations [57,58], all in the context of sign language recognition and translation. The broader field of sign language understanding also includes analyses of eyebrow position during signing to infer context and signer intent [59], further illustrating the importance of a holistic approach.…”
Section: Modelling
confidence: 99%