2020 International Conference on Intelligent Systems and Computer Vision (ISCV)
DOI: 10.1109/iscv49265.2020.9204214

Offline Arabic Handwriting Recognition Using Deep Learning: Comparative Study

Cited by 7 publications (5 citation statements)
References: 51 publications
“…Suppose that the visible layer element of the RBM is represented by v and the hidden layer element is represented by h. The formula for calculating the energy of the RBM is shown in (19).…”
Section: Emotion Classification, 1) Conditional Deep Belief Network
Citation type: mentioning (confidence: 99%)
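The excerpt cites its equation (19) without reproducing it. For reference, the energy function it presumably refers to is the standard RBM energy, written here with visible units v_i, hidden units h_j, their biases a_i and b_j, and weights w_ij:

```latex
E(\mathbf{v}, \mathbf{h}) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_{i,j} v_i \, w_{ij} \, h_j
```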
“…Using SER to achieve public opinion supervision requires the support of high-precision classification algorithms; owing to its unique layer-by-layer training mechanism, the DBN network has more powerful high-dimensional data representation and classification ability in many tasks [19]-[21]. However, the existing DBN-based SER research does not consider the temporal correlation between speech features.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
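The "layer-by-layer training mechanism" mentioned in the excerpt is the greedy layer-wise pretraining used for DBNs: each RBM in the stack is trained on the hidden activations of the layer below it. Below is a minimal NumPy sketch of that idea, not the cited paper's implementation; the layer sizes, learning rate, and CD-1 updates are illustrative assumptions.

```python
# Minimal sketch: greedy layer-by-layer pretraining of a DBN as a stack of
# RBMs trained with one step of contrastive divergence (CD-1).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.a = np.zeros(n_visible)   # visible biases
        self.b = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.a)

    def cd1_step(self, v0):
        # One CD-1 update on a mini-batch of visible vectors v0.
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / batch
        self.a += self.lr * (v0 - v1).mean(axis=0)
        self.b += self.lr * (h0 - h1).mean(axis=0)

def pretrain_dbn(data, layer_sizes, epochs=10):
    # Train each RBM on the hidden activations of the layer below it.
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden)
        for _ in range(epochs):
            rbm.cd1_step(x)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)   # propagate up for the next layer
    return rbms

# Toy usage: 200 random 64-dimensional feature vectors, two hidden layers.
features = rng.random((200, 64))
dbn = pretrain_dbn(features, layer_sizes=[32, 16])
```

Stacking RBMs trained this way initializes the deep network one layer at a time, which is what gives the DBN its high-dimensional representation ability before any supervised fine-tuning.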
“…Online systems use sensors to capture data during the writing process, while offline systems rely on images of the user's handwriting taken from a scanner or digital camera. Research suggests that online recognition has a higher recognition rate than offline mode [2], [3]. Many methods have been proposed for offline Arabic handwriting recognition to convert Arabic writings into a machine-readable format.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
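To make the offline input path described above concrete: an offline recognizer only ever sees a scanned or photographed image, so a typical first step is grayscale conversion and binarization before any feature extraction. A minimal sketch using OpenCV; the file name and Otsu thresholding are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of offline-input preprocessing: load a scanned handwriting
# image, convert to grayscale, and binarize so the ink becomes foreground.
import cv2

img = cv2.imread("arabic_sample.png")                 # hypothetical scanned page
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)          # drop color information
_, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)  # ink = white
cv2.imwrite("arabic_sample_bin.png", binary)
```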
“…When using such structured data, a Convolutional Neural Network (CNN) is superior for learning feature dependencies in wide datasets. In addition, a CNN can also provide satisfactory performance with 1-dimensional data [4], [5], [6], [7], [8]. When working with time series data, the Long Short-Term Memory (LSTM) algorithm can also easily learn temporal patterns and dependencies using memory cells and gates [9], [10], [11], [12], [13].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
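To make the contrast in the excerpt concrete, the sketch below defines a 1-D CNN and an LSTM classifier side by side in Keras. The input shape (100 time steps, 8 features), layer sizes, and 10-class output are illustrative assumptions, not taken from the cited works.

```python
# Minimal sketch: a 1-D CNN for feature sequences vs. an LSTM for temporal data.
import tensorflow as tf
from tensorflow.keras import layers

# 1-D CNN: convolutions learn local feature dependencies along the sequence.
cnn = tf.keras.Sequential([
    layers.Input(shape=(100, 8)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(10, activation="softmax"),
])

# LSTM: memory cells and gates capture longer-range temporal dependencies.
lstm = tf.keras.Sequential([
    layers.Input(shape=(100, 8)),
    layers.LSTM(64),
    layers.Dense(10, activation="softmax"),
])

cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
lstm.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

The Conv1D layer captures local patterns within a fixed window, while the LSTM's gated memory cells retain context across longer spans of the sequence, which is the distinction the excerpt draws between the two families.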