2019 Novel Intelligent and Leading Emerging Sciences Conference (NILES)
DOI: 10.1109/niles.2019.8909324

Smart Gesture-based Control in Human Computer Interaction Applications for Special-need People


Cited by 10 publications (8 citation statements)
References 21 publications
“…Table 4 lists the average classification accuracy among the five subjects obtained from LOOCV evaluation. Some state-of-the-art methods use handcrafted features based on prior knowledge [2-7, 9, 11, 14, 16, 19, 22, 24] while others automatically learn discriminative descriptors [21,27,28,32,35,36]. The proposed DDaNet outperforms the other methods in terms of accuracy (93.53%), demonstrating the benefits of learning discriminative features related to letter signs through a deep neural network with an attention module.…”
Section: Comparison With State-of-the-art Methods
confidence: 93%
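For context, the leave-one-out cross-validation (LOOCV) protocol quoted above trains on all but one subject, tests on the held-out subject, and averages the per-subject accuracies. The sketch below in Python illustrates that protocol only; the stand-in classifier, feature arrays, and parameters are assumptions for illustration, not the DDaNet pipeline of the cited work.

# Minimal sketch of leave-one-subject-out cross-validation (LOOCV):
# train on the remaining subjects, test on the held-out one, and
# report the mean accuracy over subjects. The SVM is a placeholder
# classifier, not the cited deep network.
import numpy as np
from sklearn.svm import SVC

def loocv_accuracy(features, labels, subject_ids):
    """features: (N, D) array, labels: (N,), subject_ids: (N,)."""
    accuracies = []
    for held_out in np.unique(subject_ids):
        train_mask = subject_ids != held_out
        test_mask = ~train_mask
        clf = SVC(kernel="rbf")  # any classifier could be substituted here
        clf.fit(features[train_mask], labels[train_mask])
        accuracies.append(clf.score(features[test_mask], labels[test_mask]))
    # Mean over subjects, matching the "average classification accuracy
    # among the five subjects" reported in the excerpt above.
    return float(np.mean(accuracies))

With five subjects, the loop runs five times and the returned mean corresponds to the per-subject average accuracy quoted in the excerpt.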
“…Deep learning techniques handle this limitation by automatically learning discriminative features from a large dataset containing scene variations. Table 1 shows that most studies have introduced convolutional neural networks (CNNs) to jointly learn hand descriptors and letter sign classifiers [21,28,32,35,36]. In some studies, deep neural networks have been designed for specific purposes, such as feature extraction [27] and classification [7].…”
Section: B. Network for Fingerspelling Recognition
confidence: 99%
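As a rough illustration of the approach summarized in this excerpt, jointly learning hand descriptors and a letter-sign classifier with a CNN, the following is a minimal sketch. The 64x64 grayscale input and the 24-class output (ASL letters excluding the motion-based J and Z) are assumptions for illustration; this is not the architecture of any network cited above.

# A small CNN that maps a cropped hand image to letter-sign logits.
# Convolutional layers act as the learned hand descriptor; the fully
# connected head acts as the letter-sign classifier.
import torch
import torch.nn as nn

class FingerspellingCNN(nn.Module):
    def __init__(self, num_classes=24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 64x64 -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),            # letter-sign logits
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: a batch of 8 single-channel 64x64 hand crops.
logits = FingerspellingCNN()(torch.randn(8, 1, 64, 64))
print(logits.shape)  # torch.Size([8, 24])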
“…Sreekanth and Narayanan (2017) proposed a convex hull algorithm for American Sign Language (ASL) digits from 0-9 and obtained accuracies of 89% to 98%. Rady et al. (2019) proposed an enhanced automatic model for hand gesture recognition using a CNN. They used both depth and color information from a Kinect sensor, applied the model to three different datasets, and obtained accuracies of 84.67%, 99.5%, and 99.85%.…”
Section: E. Robot Control
confidence: 99%
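To make the convex-hull idea in this excerpt concrete, below is a hedged Python/OpenCV sketch that counts extended fingers from a binary hand mask via convexity defects, a common basis for digit gestures. The angle and depth thresholds are assumed values for illustration, not parameters from the cited works.

# Illustrative convex-hull finger counting (not the cited implementation):
# find the hand contour, compute its convex hull, and count convexity
# defects whose geometry indicates a gap between two extended fingers.
import cv2
import numpy as np

def count_fingers(mask):
    """mask: binary uint8 image with the segmented hand as foreground."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)        # largest blob = hand
    hull = cv2.convexHull(hand, returnPoints=False)  # hull as contour indices
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    fingers = 0
    for start_i, end_i, far_i, fixpt_depth in defects[:, 0]:
        s, e, f = hand[start_i][0], hand[end_i][0], hand[far_i][0]
        a, b = np.linalg.norm(e - f), np.linalg.norm(s - f)
        c = np.linalg.norm(s - e)
        # Narrow angle and deep defect -> gap between two extended fingers.
        angle = np.arccos(np.clip((a**2 + b**2 - c**2) / (2 * a * b + 1e-6), -1, 1))
        if angle < np.pi / 2 and fixpt_depth / 256.0 > 10:  # assumed thresholds
            fingers += 1
    return fingers + 1 if fingers else 0

A digit label could then be derived from the finger count together with simple shape cues; the exact decision rules of the cited works are not reproduced here.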