DeepASLR: A CNN based human computer interface for American Sign Language recognition for hearing-impaired individuals
2022 | DOI: 10.1016/j.cmpbup.2021.100048

Cited by 47 publications (19 citation statements) | References 6 publications
“…The results show that the system has a recognition rate of 97%. In [36], the authors created a dataset and a CNN-based sign language recognition system to interpret the American Sign Language alphabet and translate it into natural language. Three datasets were used to compare the results and accuracy of each.…”
Section: Related Work (mentioning)
confidence: 99%
“…The proposed system was developed to improve communication for people with hearing loss and speech impairments. In a recent study [29], a new dataset, the American Sign Language Alphabet (ASLA), was created under varied conditions such as lighting and distance, and letter fingerspelling images were classified using a CNN. The authors demonstrated that, using the ASLA dataset, their system achieves higher performance than previous studies that used different datasets.…”
Section: A Study Using Deep Learning (mentioning)
confidence: 99%
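
As a rough illustration of the CNN letter-classification approach described in this excerpt, a minimal Keras sketch might look as follows. The input size, class count, and layer widths are assumptions chosen for illustration; this is not the architecture of DeepASLR or the cited ASLA study.

# Minimal CNN sketch for ASL alphabet image classification (illustrative only;
# the 64x64 grayscale input, 26 classes, and layer sizes are assumptions).
from tensorflow.keras import layers, models

NUM_CLASSES = 26           # one class per ASL letter (assumption)
INPUT_SHAPE = (64, 64, 1)  # grayscale letter images (assumption)

model = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.Conv2D(32, 3, activation="relu"),   # low-level edge features
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),   # hand-shape features
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                       # regularization for small datasets
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)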
“…Computer vision is a field of artificial intelligence that focuses on problems involving images and videos. CNNs [1][2][3][5] combined with computer vision are capable of solving complex problems. The goal is to develop a practical and meaningful system that can understand sign language and translate it into the corresponding text.…”
Section: Sign Language Recognition (mentioning)
confidence: 99%
“…From the above literature survey, the authors use different techniques to implement and develop models based on vision-based approaches, sensors, MOPGRU (MediaPipe-optimized gated recurrent unit) [4], and CNNs (convolutional neural networks) [1][2][3] and [5], which are used for image recognition and tasks that involve processing pixel data. LSTMs are used to learn, process, and classify sequential data because these networks can learn long-term dependencies between time steps of the data.…”
Section: Sign Language Recognition Via Skeleton-aware Multi-model E... (mentioning)
confidence: 99%
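
As a rough illustration of the LSTM-based sequence classification mentioned in this excerpt, a minimal Keras sketch over per-frame hand-keypoint features might look as follows. The sequence length, feature layout, and class count are assumptions for illustration; this is not the cited MOPGRU model.

# Minimal LSTM sketch for classifying gesture sequences (illustrative only;
# assumes 30 frames per clip and 21 (x, y) hand keypoints per frame from a
# MediaPipe-style tracker, flattened to 42 features).
from tensorflow.keras import layers, models

SEQ_LEN = 30      # frames per gesture clip (assumption)
FEATURES = 42     # 21 landmarks x (x, y) per frame (assumption)
NUM_CLASSES = 26  # one class per sign (assumption)

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, FEATURES)),
    layers.LSTM(64, return_sequences=True),  # per-frame temporal features
    layers.LSTM(64),                         # summarize the whole sequence
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])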