2012
DOI: 10.7763/ijmlc.2012.v2.229

Sign Language Recognition Using Motion History Volume and Hybrid Neural Networks

Abstract: In this paper, we present a sign language recognition model which does not use any wearable devices for object tracking. System design and implementation issues such as data representation, feature extraction, and pattern classification methods are discussed. The proposed data representation method for sign language patterns is robust to spatio-temporal variances of feature points. We present a feature extraction technique which can improve computation speed by reducing the amount of fe…
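The "Motion History Volume" in the title builds on the classic motion-history update, where pixels that move are stamped with a maximum value and everything else decays over time. The following is a minimal NumPy sketch of that generic update rule, not the paper's exact formulation; the `tau` and `threshold` values are assumptions for illustration:

```python
import numpy as np

def update_motion_history(mhv, frame_diff, tau=30, threshold=25):
    """One update step of a motion-history buffer.

    mhv        -- 2-D float array holding the current motion history
    frame_diff -- absolute difference between consecutive grayscale frames
    tau        -- history length in frames (assumed value)
    threshold  -- motion threshold on pixel differences (assumed value)
    """
    moving = frame_diff > threshold
    # Moving pixels are refreshed to tau; static pixels decay toward zero.
    return np.where(moving, float(tau), np.maximum(mhv - 1.0, 0.0))

# Toy example: one moving pixel is stamped with tau, the rest stay at zero.
history = np.zeros((4, 4))
diff = np.zeros((4, 4))
diff[1, 1] = 50.0
history = update_motion_history(history, diff)
```

Stacking such per-frame histories (or extending the buffer along the time axis) yields a spatio-temporal volume that tolerates timing variation, since recent motion dominates regardless of exact frame alignment.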


Cited by 4 publications (2 citation statements)
References 9 publications
“…Based on the results obtained in this paper and previous studies related to BISINDO recognition, we decided to use AF-DTW model for our next research in recognizing sentence of BISINDO. However, some deep learning algorithms can be tried to improve the accuracy of BISINDO recognition, as in [19], [20]. In fact, the sign language is used by the deaf or hard-hearing people to communicate in the form of sentence, not a word.…”
Section: Discussion
confidence: 99%
“…Therefore, each sub-layer generates a feature map which reflects successively larger ranges of the preceding unit. In our previous study [7], we introduced an extended version of the CNN for temporal feature extraction. The input data for the feature extractor are represented as a spatiotemporal volume which is described in the previous section.…”
Section: Feature Extraction and Recognition Module
confidence: 99%
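The citation above describes sub-layers that each produce a feature map covering a successively larger range of the input spatiotemporal volume. A minimal NumPy sketch of one such 3-D convolution layer follows; this is a generic illustration of the technique, not the cited paper's architecture, and all sizes are assumptions:

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Valid-mode 3-D convolution over a (T, H, W) spatiotemporal volume.

    Each output cell summarizes a t x h x w neighborhood of the input, so
    stacking layers lets later feature maps cover larger temporal ranges.
    """
    t, h, w = kernel.shape
    T, H, W = volume.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i + t, j:j + h, k:k + w] * kernel)
    return out

# A 16-frame stack of 32x32 frames convolved with a 3x3x3 averaging kernel.
volume = np.random.rand(16, 32, 32)
features = conv3d_valid(volume, np.ones((3, 3, 3)) / 27.0)
# features.shape -> (14, 30, 30)
```

Applying a second such layer to `features` would make each resulting unit depend on five consecutive input frames, which is the "successively larger ranges" behavior the quoted passage describes.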