2017
DOI: 10.48550/arxiv.1710.06836
Preprint
Using Deep Convolutional Networks for Gesture Recognition in American Sign Language

Cited by 12 publications (13 citation statements)
References 0 publications
“…At this end of the curve, the observed performance was comparable/similar to that reported in e.g. [7] (see also references therein). The numbers of errors per each gesture in the trained system are shown in Table 2.…”
Section: Experiments and Results (supporting)
confidence: 88%
“…With this approach, Pigou et al [8] achieved an accuracy of 91.7%. Bheda et al [10] used a deep convolutional network to classify ASL with alphabets and digits. However, unlike the previous two approaches, this work used cascaded CNN rather than the traditional CNN.…”
Section: Custom CNN Models (mentioning)
confidence: 99%
“…CNNs have also been applied in order to recognize sign language in a single frame [39] or a sequence of frames [3] (dynamic gestures). In [40], the CNN takes both intensity and depth video sequences as input for the recognition of dynamic gestures with the objective of designing touchless interfaces in cars.…”
Section: Related Work (mentioning)
confidence: 99%
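The statement above describes CNNs classifying a sign from a single frame. As a toy illustration only, the sketch below runs one forward pass of the standard pipeline (convolution, ReLU, max pooling, linear classifier) in plain NumPy. The filter count, the 32×32 grayscale input, the 26-letter output, and the random weights are all illustrative assumptions, not details from any cited system.

```python
import numpy as np

def conv2d(x, kernels):
    """Valid 2D convolution: x (H, W), kernels (K, kh, kw) -> (K, H', W')."""
    K, kh, kw = kernels.shape
    H, W = x.shape
    oh, ow = H - kh + 1, W - kw + 1
    out = np.zeros((K, oh, ow))
    for k in range(K):
        for i in range(oh):
            for j in range(ow):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * kernels[k])
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling on each feature map; x shape (K, H, W)."""
    K, H, W = x.shape
    x = x[:, :H - H % size, :W - W % size]
    return x.reshape(K, H // size, size, W // size, size).max(axis=(2, 4))

def forward(frame, kernels, weights, bias):
    """Conv -> ReLU -> pool -> flatten -> linear; returns the predicted class."""
    feat = max_pool(relu(conv2d(frame, kernels)))
    logits = weights @ feat.reshape(-1) + bias
    return int(np.argmax(logits))

rng = np.random.default_rng(0)
frame = rng.random((32, 32))                      # toy grayscale input frame
kernels = rng.standard_normal((4, 3, 3)) * 0.1    # 4 learned 3x3 filters (hypothetical)
feat_dim = 4 * 15 * 15                            # 4 maps of (32-3+1)//2 = 15
weights = rng.standard_normal((26, feat_dim)) * 0.01
bias = np.zeros(26)
print(forward(frame, kernels, weights, bias))     # class index in 0..25
```

A real system along these lines would learn the kernels and weights by backpropagation on labeled sign images rather than sampling them at random.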