2021
DOI: 10.11591/ijeecs.v24.i1.pp178-188

Static hand gesture recognition of Arabic sign language by using deep CNNs

Abstract: An Arabic sign language recognition system using two concatenated deep convolutional neural network models, DenseNet121 and VGG16, is presented. The pre-trained models are fed with images, and the system then automatically recognizes the Arabic sign language. To evaluate the performance of the two concatenated models on Arabic sign language recognition, red-green-blue (RGB) images of various static signs were collected into a dataset. The dataset comprises 220,000 images across 44 categories: 32 letters, 11 numbers…
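
The abstract describes concatenating features from DenseNet121 and VGG16 backbones before classifying into 44 categories. Below is a minimal sketch of one way to build such a concatenated model in Keras; the input size (224x224 RGB), pooling, dropout rate, classifier head, and training settings are assumptions, since the excerpt does not specify them, and per-branch preprocessing is assumed to be handled in the input pipeline.

```python
# Minimal sketch (assumptions noted above): two ImageNet-pretrained backbones
# share one input, their pooled features are concatenated, and a softmax head
# classifies the 44 sign categories mentioned in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet121, VGG16

NUM_CLASSES = 44               # 32 letters, 11 numbers, ... (from the abstract)
INPUT_SHAPE = (224, 224, 3)    # assumed RGB input size

inputs = layers.Input(shape=INPUT_SHAPE)

# Both backbones are built on the same input tensor, without their top layers.
densenet = DenseNet121(include_top=False, weights="imagenet",
                       input_tensor=inputs, pooling="avg")
vgg = VGG16(include_top=False, weights="imagenet",
            input_tensor=inputs, pooling="avg")

# Concatenate the globally pooled feature vectors (1024 + 512 dims).
features = layers.Concatenate()([densenet.output, vgg.output])
x = layers.Dropout(0.5)(features)                      # assumed regularization
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```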

Cited by 8 publications (6 citation statements)
References 24 publications
“…In [55], the authors evaluated classification algorithms used in recognition, such as traditional machine learning and deep learning, and discussed previous work on differentiating between static alphabetic and dynamic sign languages for Arabic and non-Arabic sign languages. In [56], the authors developed a fully automated method for recognizing 28 Arabic signs for letters and numbers.…”
Section: Related Work (mentioning)
confidence: 99%
“…Then, the agent position is altered until it reaches an effective location (Y*, Z*), changing the locations of X and L. Location updating is governed by Eq. (10), where p < 1 indicates that an individual is allowed to move only in a random way, regardless of the angle location. Therefore, Eqs.…”
Section: Propagation Through a Leader's Position (mentioning)
confidence: 99%
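
Eq. (10) itself is not reproduced in this excerpt, so the sketch below is only a generic illustration of the behaviour described: when the switch parameter p < 1, the agent takes a random step regardless of its angle to the leader, and otherwise it moves toward the leader's location (Y*, Z*). All names, step sizes, and the non-random branch are assumptions, not the cited equation.

```python
# Illustrative sketch only; NOT Eq. (10) from the cited work.
import numpy as np

rng = np.random.default_rng()

def update_position(agent, leader, p, step=0.1):
    """agent, leader: position vectors; p: switch parameter (assumed semantics)."""
    agent = np.asarray(agent, dtype=float)
    leader = np.asarray(leader, dtype=float)
    if p < 1.0:
        # Random walk, independent of the angle to the leader's position.
        return agent + step * rng.standard_normal(agent.shape)
    # Assumed alternative branch: move a fraction of the way toward (Y*, Z*).
    return agent + step * (leader - agent)
```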
“…Vision-based techniques largely target the captured gesture image and extract the primary features for identifying it. These techniques have been applied to several tasks, including semantic segmentation, super-resolution, multimedia systems, emotion recognition, and image classification [10].…”
Section: Introduction (mentioning)
confidence: 99%
“…First, object detection is applied in the ROBOSOCCER model to detect the ball/NAO and its position in the first frame. The next frame is then given to the model and the new position is obtained [36]. The old measurement is then updated using the new one.…”
Section: Tracking Algorithm (mentioning)
confidence: 99%
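
The tracking loop cited above follows a detect-then-update pattern: detect the object in the first frame, detect it again in each subsequent frame, and fold the new measurement into the old one. A minimal sketch under stated assumptions is given below; detect_object is a hypothetical stand-in for the ball/NAO detector, and the exponential blend is only an illustrative update rule, since the excerpt does not specify the filter used.

```python
# Sketch of a per-frame detect-then-update tracking loop (assumptions above).
import numpy as np

ALPHA = 0.6  # assumed blending weight for new measurements

def detect_object(frame):
    """Hypothetical stand-in for the object detector (returns (x, y) or None)."""
    raise NotImplementedError

def track(frames):
    position = None  # current estimate of the object position
    for frame in frames:
        measurement = detect_object(frame)
        if measurement is None:
            continue  # keep the previous estimate when detection fails
        measurement = np.asarray(measurement, dtype=float)
        if position is None:
            # First frame: initialize the estimate from the first detection.
            position = measurement
        else:
            # Update the old measurement with the new one.
            position = ALPHA * measurement + (1.0 - ALPHA) * position
        yield position
```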