2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR)
DOI: 10.1109/icfhr.2016.0028
Deep Knowledge Training and Heterogeneous CNN for Handwritten Chinese Text Recognition

Cited by 44 publications (28 citation statements)
References 12 publications
“…In Table 3, we provide a comparison between the ACE loss and previous methods. It is evident that the proposed ACE loss function exhibits higher performance than previous methods, including MDLSTM-based models [34,47], an HMM-based model [10], and over-segmentation methods [27,44,45,48] with and without a language model (LM). Compared to scene text recognition, the handwritten Chinese text recognition problem poses unique challenges, such as a large character set (7357 classes) and the character-touching problem.…”
Section: Results
confidence: 95%
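The ACE (Aggregation Cross-Entropy) loss discussed in the excerpt above replaces per-frame alignment with a count-based objective: per-frame class probabilities are summed over time, normalized, and compared against the normalized character counts of the label. A minimal NumPy sketch of that idea, assuming a `(T, K)` softmax output with class 0 reserved for blank (the function name and the small epsilon are illustrative, not from the cited paper):

```python
import numpy as np

def ace_loss(probs, label_counts):
    """Aggregation Cross-Entropy loss sketch.

    probs:        (T, K) per-frame softmax outputs; class 0 is blank.
    label_counts: (K,) occurrence count of each non-blank class in the
                  label string; entry 0 is ignored and recomputed here.
    """
    T, K = probs.shape
    counts = label_counts.astype(float).copy()
    counts[0] = T - counts[1:].sum()      # blanks absorb the remaining frames
    y = probs.sum(axis=0) / T             # aggregated, normalized predictions
    n = counts / T                        # normalized label counts
    return float(-(n * np.log(y + 1e-12)).sum())
```

Because only class counts are compared, the loss needs no frame-level alignment, which is what makes it attractive for long text lines with a large character set.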
“…Although the confusion among the 7360 classes is higher, Table IX shows an overall comparison of our proposed method and other state-of-the-art methods, without and with a language model, on the ICDAR 2013 competition set. We list the state-of-the-art over-segmentation methods heterogeneous CNN [7] and CNNs-RNNLM [8], and the segmentation-free methods SMDLSTM-CTC [15] and CNN-ACE [16], in Table IX for comparison. With the same vocabulary-size configuration (4 additional garbage classes adopted in our HMM system), the proposed WCNN-PHMM yielded the best performance whether or not a language model was employed.…”
Section: Visualization Analysis for Writer Code
confidence: 99%
“…In general, the research efforts for offline HCTR can be divided into two categories: over-segmentation-based approaches and segmentation-free approaches. The former approaches [5], [6], [7], [8] typically build several modules, first for character over-segmentation, character classification, and modeling of the linguistic and geometric contexts, and then combine them to compute the score for the path search. The recent work in [8], with a neural network language model, adopted three different CNN models to replace the conventional character classifier, segmentation model, and geometric model, achieving the best performance among over-segmentation-based methods on the ICDAR 2013 competition dataset [9].…”
Section: Introduction
confidence: 99%
“…where r is the iteration index and 0 < ρ < 1 is a scalar that gradually increases at a rate ρ_rate ≥ 1. Closed-form solutions for (7) and (8) can be obtained by taking the derivatives with respect to every sub-dictionary and equating them to zero. Algorithm 1: Incoherent dictionary pair learning (InDPL). Input: X_1, X_2, .…”
Section: Incoherent Dictionary Pair Learning (InDPL)
confidence: 99%
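The excerpt above describes a geometrically increasing penalty parameter, a common device in alternating-minimization schemes: ρ starts small and is multiplied by a fixed rate ρ_rate ≥ 1 at each iteration. A minimal sketch of such a schedule, assuming an optional cap `rho_max` (both the function name and the cap are illustrative, not from the cited paper):

```python
def rho_schedule(rho0, rho_rate, num_iters, rho_max=None):
    """Return the penalty values rho_r = rho0 * rho_rate**r for r = 0..num_iters-1.

    rho0:     initial penalty, 0 < rho0 < 1
    rho_rate: growth factor, rho_rate >= 1
    rho_max:  optional upper bound on the penalty (illustrative addition)
    """
    rhos = [rho0]
    for _ in range(num_iters - 1):
        nxt = rhos[-1] * rho_rate
        if rho_max is not None:
            nxt = min(nxt, rho_max)
        rhos.append(nxt)
    return rhos
```

Increasing the penalty gradually lets early iterations explore freely while later iterations enforce the constraint more strictly.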