2017
DOI: 10.1007/978-3-319-57021-1_8
Dynamic Affine-Invariant Shape-Appearance Handshape Features and Classification in Sign Language Videos

Cited by 7 publications (8 citation statements)
References 38 publications
“…Deformable masks provided by active appearance models (AAMs) [8] can successfully help with detecting and tracking several types of informative events of a sign sequence, e.g., eye blinking, as done in [9]. AAMs [10] have also significantly boosted the performance of handshape recognition in sign language videos.…”
Section: Related Work
confidence: 99%
“…However, in the scope of this work, hand shape recognition is seen as a classification task of a specific number of defined hand shapes. Known approaches fall into three categories: (i) template matching against a large data set of often synthetic gallery images [25] or contour shapes [1,3]; (ii) generative model fitting approaches [35,10,28]; and (iii) discriminative modelling approaches such as Cooper et al [6]. Cooper uses random forests trained on HOG features to distinguish 12 hand shapes, each trained on 1000 training samples.…”
Section: State-of-the-Art
confidence: 99%
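The discriminative approach in the excerpt above (HOG descriptors fed to a random forest classifier) can be sketched as follows. Everything here is an illustrative stand-in: the images are random arrays rather than hand crops, only 3 of the 12 hand-shape classes are simulated, and a simplified numpy gradient-orientation histogram takes the place of a full HOG implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def hog_like(img, cell=8, bins=9):
    """Simplified HOG-style descriptor: per-cell histograms of unsigned
    gradient orientations, weighted by gradient magnitude and L2-normalized."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # fold orientations into [0, pi)
    feats = []
    h, w = img.shape
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            hist, _ = np.histogram(ang[i:i + cell, j:j + cell],
                                   bins=bins, range=(0.0, np.pi),
                                   weights=mag[i:i + cell, j:j + cell])
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
# Stand-in data: 60 synthetic 64x64 grayscale "hand crops", 3 classes
images = rng.random((60, 64, 64))
labels = rng.integers(0, 3, size=60)

X = np.array([hog_like(img) for img in images])  # 64 cells x 9 bins = 576 dims
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
pred = clf.predict(X)
```

In a real setup the descriptor would come from a proper HOG implementation (e.g. with block normalization) and the classifier would be evaluated on held-out samples rather than the training set.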
“…Three papers proposed novel methods within the area of sign language recognition (Cooper et al, 2012; Nayak et al, 2012; Roussos et al, 2013). (Cooper et al, 2012) describe a method for sign language recognition using linguistic subunits that are learned automatically by the system.…”
Section: Summary Of Special Topic Papers Not Related To The Challenges
confidence: 99%
“…Another benefit is that the method identifies the aspects of a sign that are least affected by movement epenthesis, i.e., by signs immediately preceding or following the sign in question. (Roussos et al, 2013) present a method for classifying handshapes for the purpose of sign language recognition. Cropped hand images are converted to a normalized representation called "shape-appearance images", based on a PCA decomposition of skin pixel colors.…”
Section: Summary Of Special Topic Papers Not Related To The Challenges
confidence: 99%
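The "shape-appearance image" construction described in the excerpt above rests on a PCA of skin pixel colors. A minimal sketch of that projection step might look like the following; the random crop, image size, and variable names are hypothetical stand-ins for a real skin-cropped hand image, and this shows only the color-normalization idea, not the full method.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for a cropped hand image: 32x32 RGB, values in [0, 1]
crop = rng.random((32, 32, 3))

pixels = crop.reshape(-1, 3)      # one RGB color sample per pixel
mean = pixels.mean(axis=0)
centered = pixels - mean

# PCA over the pixel colors via SVD of the centered color samples;
# rows of vt are the principal color axes, ordered by explained variance
_, _, vt = np.linalg.svd(centered, full_matrices=False)

# Projecting every pixel onto the first principal color axis yields a
# single-channel image normalized with respect to skin color variation
sa_image = (centered @ vt[0]).reshape(32, 32)
```

The projection onto the dominant color axis discards lighting- and tone-dependent color detail while keeping per-pixel appearance contrast, which is what makes the representation comparable across signers.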