2023
DOI: 10.1109/access.2023.3249787

Handwritten Gujarati Numerals Classification Based on Deep Convolution Neural Networks Using Transfer Learning Scenarios

Abstract: In recent years, handwritten numeral classification has received remarkable attention in the field of computer vision. Handwritten numbers are difficult to recognize due to individuals' differing writing styles. In a multilingual country like India, negligible research has been carried out on handwritten Gujarati numeral recognition using deep learning techniques compared with other regional scripts. The Gujarati digit dataset is not publicly available, and deep learning requires a large amo…

Cited by 13 publications (9 citation statements)
References: 39 publications
“…The study [2] explores the categorization of handwritten Gujarati numerals ranging from zero to nine using deep transfer learning techniques. The investigation utilized ten preexisting CNN architectures, namely LeNet, VGG16, InceptionV3, ResNet50, Xception, ResNet101, MobileNet, MobileNetV2, DenseNet169, and EfficientNetV2S, to identify the most suitable model through the fine-tuning of weight parameters.…”
Section: Transfer Learning Approach
confidence: 99%
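The setup this statement describes, taking a pre-trained CNN and fine-tuning its weight parameters for the ten Gujarati digit classes, can be illustrated with a short sketch. The following Keras example is an assumption-laden illustration, not the authors' code: it uses one of the named backbones (MobileNetV2) with ImageNet weights, and the input size and optimizer settings are placeholders.

```python
# Minimal sketch of transfer-learning fine-tuning for 10 digit classes.
# Backbone, input size, and hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,        # drop the 1000-class ImageNet head
    weights="imagenet",
)
base.trainable = True         # fine-tune the pre-trained weight parameters

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation="softmax"),  # Gujarati digits 0-9
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Swapping the backbone for any of the other architectures named above (VGG16, ResNet50, DenseNet169, and so on) changes only the first line of the model definition.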
“…(1) $\mathrm{Recall} = \frac{TP}{TP + FN}$ (2). Recall is the proportion of relevant instances that have been correctly identified. It quantifies the percentage of true positives that were accurately recognized.…”
Section: $\mathrm{Precision} = \frac{TP}{TP + FP}$ …
confidence: 99%
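A small worked example may help make the two definitions concrete; the confusion-matrix counts below are invented purely for illustration.

```python
# Hypothetical counts: 90 true positives, 10 false positives, 5 false negatives.
TP, FP, FN = 90, 10, 5

precision = TP / (TP + FP)  # 90 / 100 = 0.900
recall = TP / (TP + FN)     # 90 / 95  ~ 0.947

print(f"precision = {precision:.3f}, recall = {recall:.3f}")
```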
“…In existing exploratory research on transfer learning, significant attention has been given to the following directions: 1) the degree of fine-tuning of pre-trained models, 2) few-shot transfer learning, and 3) zero-shot transfer learning. To address the question of how much fine-tuning should be applied to a pre-trained convolutional neural network when training the target model, Goel et al. [25] employed three transfer learning strategies to demonstrate the impact of different fine-tuning strategies on model performance. They discovered that connecting two newly adapted fully connected layers to the final convolutional layer of the pre-trained model and fine-tuning only the fully connected layers improved model performance and significantly reduced training time.…”
Section: B. Transfer Learning in Character Recognition
confidence: 99%
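The strategy attributed to Goel et al. [25], freezing the pre-trained convolutional base, attaching two new fully connected layers after the final convolutional layer, and training only those layers, can be sketched as below. The backbone choice (VGG16) and layer widths are assumptions for illustration, not details taken from the cited work.

```python
# Sketch of partial fine-tuning: only the two new fully connected layers train.
# Backbone, input size, and layer widths are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(include_top=False,
                                   weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False        # convolutional weights stay frozen

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),    # newly added FC layer 1
    layers.Dense(10, activation="softmax"),  # newly added FC layer 2 (classes)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Only the two Dense layers receive gradient updates, which is what shortens
# training time relative to fine-tuning the whole network.
```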