1992
DOI: 10.1016/0141-5425(92)90088-3

Image processing system for interpreting motion in American Sign Language


Cited by 66 publications (22 citation statements)
References 6 publications

“…where R is a 3×3 rotation matrix. The receiver outputs the Eulerian angles α, β, γ, the angles of rotation about the X₁, X₂ and X₃ axes. Normally these data cannot be used directly as features, because an inconsistent reference may exist, since the position of the transmitter might change between training and testing.…”
Section: Feature Extraction (mentioning)
confidence: 99%
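The excerpt above only names the quantities involved. As a rough sketch of how the Eulerian angles α, β, γ can be composed into the rotation matrix R, and of one way to cancel the dependence on where the transmitter happens to be placed, consider the following Python/NumPy fragment; the axis convention, function names, and the reference-orientation normalization are assumptions for illustration, not details taken from the cited system.

```python
import numpy as np

def euler_to_rotation(alpha, beta, gamma):
    """Compose a 3x3 rotation matrix from Euler angles alpha, beta, gamma,
    taken here as successive rotations about the X1, X2 and X3 axes.
    The axis order and rotation convention are assumptions for this sketch."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    r1 = np.array([[1.0, 0.0, 0.0],
                   [0.0,  ca, -sa],
                   [0.0,  sa,  ca]])   # rotation about X1
    r2 = np.array([[ cb, 0.0,  sb],
                   [0.0, 1.0, 0.0],
                   [-sb, 0.0,  cb]])   # rotation about X2
    r3 = np.array([[ cg, -sg, 0.0],
                   [ sg,  cg, 0.0],
                   [0.0, 0.0, 1.0]])   # rotation about X3
    return r3 @ r2 @ r1

def orientation_relative_to_reference(r_sample, r_reference):
    """Express a sample orientation relative to a fixed reference orientation,
    one way to remove the dependence on the transmitter's placement."""
    return r_sample @ r_reference.T
```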
“…Attempts to automatically recognize sign language began to appear in the 1990s. Charayaphan and Marble [1] investigated a way of using image processing to understand American Sign Language (ASL). This system can correctly recognize 27 of the 31 ASL symbols.…”
Section: Introduction (mentioning)
confidence: 99%
“…For shape, Freeman et al. [24] used x-y image moments and orientation histograms, and Hunter et al. [38] used rotationally invariant Zernike moments. Others [16,20,77,79] considered the motion trajectories of the hand centroids. Quek [66] proposed using shape and motion features alternately for the interpretation of hand gestures.…”
Section: 2-D Approaches Without Explicit Shape Models (mentioning)
confidence: 99%
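None of the cited systems' code is given in the excerpt; as a minimal sketch of how hand-centroid motion trajectories can be derived from x-y image moments of a segmented hand mask (the segmentation step is assumed to be done elsewhere, and the function names are illustrative), something like the following could be used.

```python
import numpy as np

def hand_centroid(mask):
    """Centroid of a binary hand mask from raw image moments:
    x_bar = m10 / m00, y_bar = m01 / m00."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:          # m00 == 0: no hand pixels in this frame
        return None
    return float(xs.mean()), float(ys.mean())

def centroid_trajectory(masks):
    """Stack per-frame hand centroids into a motion trajectory."""
    points = [hand_centroid(m) for m in masks]
    return np.array([p for p in points if p is not None])
```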
“…For the above systems, action classification is based on hard-coded decision trees [16,20,79], on nearest-neighbor criteria [38,65], or on general pattern-matching techniques for time-varying data, as described in Section 6. Some additional constraints on actions can be imposed using a dialogue structure, where the current state limits the possible actions that can be expected next.…”
Section: 2-D Approaches Without Explicit Shape Models (mentioning)
confidence: 99%
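As an illustration of the nearest-neighbor criterion over time-varying trajectory data (the decision-tree and general pattern-matching alternatives are not sketched here), one simple scheme is to resample each trajectory to a fixed length and return the label of the closest labelled template; the resampling step and function names are assumptions, not details of the cited systems.

```python
import numpy as np

def resample(trajectory, n_points=32):
    """Linearly resample a (T, 2) centroid trajectory to a fixed length so that
    gestures of different durations can be compared point by point."""
    trajectory = np.asarray(trajectory, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(trajectory))
    t_new = np.linspace(0.0, 1.0, n_points)
    return np.stack([np.interp(t_new, t_old, trajectory[:, d])
                     for d in range(trajectory.shape[1])], axis=1)

def nearest_neighbor_sign(query, templates, labels, n_points=32):
    """Classify a query trajectory with the label of the closest labelled
    template under Euclidean distance after resampling (1-nearest-neighbor)."""
    q = resample(query, n_points).ravel()
    distances = [np.linalg.norm(q - resample(t, n_points).ravel())
                 for t in templates]
    return labels[int(np.argmin(distances))]
```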
“…Charayaphan and Marble [11] demonstrate a feature set that distinguishes between the 31 isolated ASL signs in their training set (which also acts as the test set). More recently, Cui and Weng [12] have shown an image-based system with 96% accuracy on 28 isolated gestures.…”
Section: Machine Sign Language Recognition (mentioning)
confidence: 99%