Proceedings of the 2018 International Conference on Control and Computer Vision - ICCCV '18 2018
DOI: 10.1145/3232651.3232657
Hand Shape Recognition Using Very Deep Convolutional Neural Networks

Abstract: This work examines the application of modern deep convolutional neural network architectures for classification tasks in the sign language domain. Transfer learning is performed by pre-training the models on the ImageNet dataset. After fine-tuning on the ASL fingerspelling and the 1 Million Hands datasets, the models outperform state-of-the-art approaches on both hand shape classification tasks. Introspection of the trained models using Saliency Maps is also performed to analyze how the networks make their deci…

Cited by 8 publications (7 citation statements)
References 19 publications
“…Table 4 lists the average classification accuracy among the five subjects obtained from LOOCV evaluation. Some state-of-the-art methods use handcrafted features based on prior knowledge [2-7, 9, 11, 14, 16, 19, 22, 24] while others automatically learn discriminative descriptors [21,27,28,32,35,36]. The proposed DDaNet outperforms the other methods in terms of accuracy (93.53%), demonstrating the benefits of learning discriminative features related to letter signs through a deep neural network with an attention module.…”
Section: Comparison With State-of-the-art Methods
confidence: 93%
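The LOOCV protocol described in the citation above — train on all subjects but one, test on the held-out subject, and average accuracy across the five folds — can be sketched as follows. This is a minimal illustration, not the cited DDaNet pipeline: the subject IDs and feature arrays are placeholders, and a simple nearest-centroid classifier stands in for the deep network.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Compute per-class centroids of the training features
    (a toy stand-in for training a deep classifier)."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(classes, centroids, X):
    """Assign each sample to the class with the nearest centroid."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]

def leave_one_subject_out_accuracy(features, labels):
    """features/labels: dicts keyed by subject ID (hypothetical layout).
    Returns accuracy averaged over folds, each holding out one subject."""
    subjects = sorted(features)
    accs = []
    for held_out in subjects:
        train = [s for s in subjects if s != held_out]
        X_tr = np.vstack([features[s] for s in train])
        y_tr = np.concatenate([labels[s] for s in train])
        classes, centroids = nearest_centroid_fit(X_tr, y_tr)
        y_pred = nearest_centroid_predict(classes, centroids, features[held_out])
        accs.append(float((y_pred == labels[held_out]).mean()))
    return float(np.mean(accs))
```

Evaluating per held-out subject, rather than shuffling all samples, measures generalization to unseen signers — the setting the 93.53% figure refers to.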
“…When compared to the deep-learning-based methods that jointly extract features and classify the letter signs [7,21,32,36], the proposed DDaNet achieves the highest precision (94.10%), recall (93.48%), and F-score (93.26%). Note that only four methods report precision, recall, and F-score.…”
Section: Comparison With State-of-the-art Methods
confidence: 99%
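For multi-class letter-sign classification, precision, recall, and F-score as quoted above are typically macro-averaged: computed per class from true/false positives and false negatives, then averaged. A self-contained sketch (assuming macro averaging, which the citation does not state explicitly):

```python
def macro_precision_recall_f1(y_true, y_pred):
    """Macro-averaged precision, recall, and F1 over the classes in y_true."""
    classes = sorted(set(y_true))
    precisions, recalls, f1s = [], [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(classes)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n
```

Unlike plain accuracy, macro averaging weights every letter class equally, which matters when some fingerspelled letters occur far more often than others.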
“…1. Architectural comparison between the existing two-stage networks ((a) [1], [2]; (b) [3], [4]) and the proposed one-for-all, end-to-end solution for HGR. The visual representation demonstrates the proposed framework's efficacy in handling all kinds of challenges: complex signs, illumination variations, and complex, cluttered backgrounds.…”
Section: Introduction
confidence: 99%