2020 IEEE Recent Advances in Intelligent Computational Systems (RAICS)
DOI: 10.1109/raics51191.2020.9332480

Rethinking Generalization in American Sign Language Prediction for Edge Devices with Extremely Low Memory Footprint

Abstract: Due to the boom in technical compute in the last few years, the world has seen massive advances in artificially intelligent systems solving diverse real-world problems. But a major roadblock in the ubiquitous acceptance of these models is their enormous computational complexity and memory footprint. Hence efficient architectures and training techniques are required for deployment on extremely low resource inference endpoints. This paper proposes an architecture for detection of alphabets in American Sign Langu…
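The abstract targets deployment of a sign-language classifier under a very tight memory budget. As a rough illustration only, the sketch below builds a small depthwise-separable CNN for 26-class ASL alphabet images; the layer choices, input resolution, and channel widths are assumptions and not the architecture proposed in the paper.

```python
# Illustrative sketch only: a compact depthwise-separable CNN for 26-class ASL
# alphabet classification. This is NOT the paper's architecture; layer choices,
# input resolution, and channel widths are assumptions.
import tensorflow as tf

def build_tiny_asl_model(input_shape=(64, 64, 1), num_classes=26):
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.SeparableConv2D(8, 3, strides=2, padding="same", activation="relu"),
        tf.keras.layers.SeparableConv2D(16, 3, strides=2, padding="same", activation="relu"),
        tf.keras.layers.SeparableConv2D(32, 3, strides=2, padding="same", activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_tiny_asl_model()
model.summary()  # parameter count stays in the low thousands
```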

Cited by 29 publications (7 citation statements)
References 13 publications
“…Interpolation augmentation, as suggested in [22], was used in the proposed model, improving generalization, and corroborating the statement, "Interpolation Augmentation seems to improve generalization for resource-constrained endpoints."…”
Section: Discussion (supporting)
confidence: 68%
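The quoted statement credits interpolation augmentation with better generalization. The sketch below shows one common interpolation-style augmentation (blending pairs of training images and their labels); whether this matches the exact scheme in [22] is an assumption, and the function name and mixing distribution are illustrative.

```python
# Minimal sketch, assuming "interpolation augmentation" means blending pairs of
# training samples and labels. The exact technique in [22] may differ; the
# Beta(0.2, 0.2) mixing coefficient is an illustrative assumption.
import numpy as np

def interpolate_batch(images, labels, alpha=0.2, rng=None):
    """Return a batch where each sample is a convex blend of two samples."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)              # blending coefficient in (0, 1)
    perm = rng.permutation(len(images))       # partner index for each sample
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    mixed_labels = lam * labels + (1.0 - lam) * labels[perm]  # soft labels
    return mixed_images, mixed_labels

# Usage: images of shape (N, H, W, C) scaled to [0, 1], one-hot labels (N, 26)
# x_aug, y_aug = interpolate_batch(x_batch, y_batch)
```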
“…The documentation of the board suggests keeping the model under 400 KB, but during this study [22] we found that the biggest model that can fit successfully in memory is under 230 KB. A larger model of size up to 1 MB can be stored on the flash memory, but for that the model has to be converted into a FlatBuffer using STM32Cube.AI and the operating system has to be recompiled, which leads to the loss of the utility of MicroPython.…”
Section: Hardware Setup For Deployment (mentioning)
confidence: 76%
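The quoted constraint is essentially a hard size budget for the serialized model (roughly 230 KB on this board). As a sketch of how such a budget might be checked, the snippet below converts a Keras model to a TensorFlow Lite FlatBuffer with default quantization and compares its size against the limit; using the TFLite converter rather than STM32Cube.AI, and the exact 230 KB threshold, are assumptions for illustration.

```python
# Minimal sketch: convert a Keras model to a TensorFlow Lite FlatBuffer with
# default (dynamic-range) quantization and check it against a ~230 KB budget.
# Using TFLite instead of STM32Cube.AI is an illustrative assumption.
import tensorflow as tf

def fits_memory_budget(model, budget_bytes=230 * 1024):
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # weight quantization
    flatbuffer = converter.convert()                      # serialized FlatBuffer bytes
    size = len(flatbuffer)
    print(f"TFLite model size: {size / 1024:.1f} KB")
    return size <= budget_bytes

# Usage with the earlier sketch model:
# print(fits_memory_budget(build_tiny_asl_model()))
```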
“…From Table 4, it is clear that the proposed method, whether using a single model or a multi-model, is better than the models presented in the previous studies referred to in the table:
[30] Transfer learning using MobileNetV2 on 29 classes: 98.67%
Sinha et al., 2019 [31] Custom CNN model with fully connected layer on 29 classes: 96.03%
Kadhim et al., 2020 [32] Transfer learning using VGG1 on 28 classes: 98.65%
Paul et al., 2020 [33] Custom CNN model with fully connected layer on 24 classes: 99.02%
Mahmud et al., 2018 [34] HOG feature extraction & KNN classifier on 26 classes: 94.23%
Prasad, 2018 [35] Image magnitude gradient for feature extraction on 24 classes: 95.40%
Phong & Ribeiro, 2019 [36] Transfer learning on multiple architectures, etc. on 29 classes: 99.00%
Ashiquzzaman et al., 2020 [37]…”
Section: Results (mentioning)
confidence: 99%
“…A DL model was introduced to detect the Sign Language Alphabet on tiny edge devices, with the aim of enabling deaf-mute people to communicate easily with the community. The authors in [28] proposed a model to detect the American Sign Language (ASL) Alphabet and transcribe it to text and speech in real time on tiny wearable IoT devices. The device uses one of the smallest and cheapest microcontrollers.…”
Section: Sign Language Detection (mentioning)
confidence: 99%