2006
DOI: 10.1109/tpami.2006.167

Maximization of mutual information for offline Thai handwriting recognition

Abstract: This paper aims to improve the performance of an HMM-based offline Thai handwriting recognition system through discriminative training and the use of fine-tuned feature extraction methods. The discriminative training is implemented by maximizing the mutual information between the data and their classes. The feature extraction is based on our proposed block-based PCA and composite images, shown to be better at discriminating Thai confusable characters. We demonstrate significant improvements in recognition accu…

Cited by 26 publications (15 citation statements)
References 8 publications
“…, T, which are augmented by their spatial derivatives in the horizontal direction, Δ = x_t − x_{t−1}. Note that many systems divide the sliding window itself into several sub-windows and extract different features within each of the sub-windows [2,20,30,39]. In order to incorporate temporal and spatial context into the features, we concatenate 7 consecutive features in a sliding window with maximum overlap, which are later reduced by a PCA transformation matrix to a feature vector x_t of dimension 30 (see Figure 2). …”
Section: Feature Extraction
confidence: 99%
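The windowing scheme described in the excerpt above can be sketched as follows. This is a minimal illustration, not the cited system's implementation: the context length of 7 frames and the target dimension of 30 are the values reported in the quote, while the raw feature dimension, the PCA fit, and the padding at the sequence boundaries are illustrative assumptions.

```python
import numpy as np

def augment_with_deltas(frames):
    """Append horizontal derivatives delta_t = x_t - x_{t-1} to each frame."""
    deltas = np.diff(frames, axis=0, prepend=frames[:1])  # delta_0 = 0
    return np.concatenate([frames, deltas], axis=1)

def stack_context(feats, context=7):
    """Concatenate `context` consecutive frames (maximum overlap: stride 1)."""
    half = context // 2
    padded = np.pad(feats, ((half, half), (0, 0)), mode="edge")  # assumed boundary handling
    return np.stack([padded[t:t + context].ravel() for t in range(len(feats))])

def pca_reduce(stacked, dim=30):
    """Project onto the top `dim` principal components (here fit on the data itself)."""
    centered = stacked - stacked.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:dim].T

T, d = 50, 12  # 50 frames of 12 raw features; sizes are purely illustrative
raw = np.random.default_rng(0).normal(size=(T, d))
x = pca_reduce(stack_context(augment_with_deltas(raw)), dim=30)
print(x.shape)  # one 30-dimensional vector per frame: (50, 30)
```

In a real system the PCA transformation matrix would be estimated once on training data and then applied to all sequences, rather than refit per sequence as in this sketch.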
“…In general, the number of Rprop iterations and the choice of the regularization constant C have to be chosen carefully (cf. optimization Table 1 in Section 2.3), and were empirically optimized in informal experiments to 30 Rprop iterations and C = 1.0 (cf. detailed Rprop iteration analysis and convergence without overtraining in Figure 8 and Figure 9).…”
Section: First Pass Decoding
confidence: 99%
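For context on the excerpt above: Rprop adapts a separate step size per parameter from the sign of successive gradients. The sketch below shows the generic sign-based update (the iRprop− variant, which zeroes the gradient after a sign flip) on a toy quadratic objective; the 30 iterations match the count reported in the quote, but the objective, initial step size, and growth/shrink factors (standard defaults 1.2 and 0.5) are illustrative, and the regularization constant C of the cited MMI criterion is not modeled here.

```python
import numpy as np

def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One Rprop update: grow the step while the gradient sign is stable, shrink on a flip."""
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)  # iRprop-: skip the update after a sign flip
    return -np.sign(grad) * step, step, grad

# Minimize the toy objective f(w) = sum((w - 3)^2) with 30 Rprop iterations.
w = np.zeros(4)
step = np.full_like(w, 0.1)
prev_grad = np.zeros_like(w)
for _ in range(30):
    grad = 2 * (w - 3)  # analytic gradient of the toy objective
    update, step, prev_grad = rprop_step(grad, prev_grad, step)
    w += update
print(np.round(w, 2))  # converges close to the minimizer w = 3
```

Because only gradient signs are used, Rprop is insensitive to the scale of the objective, which is one reason it is a popular choice for discriminative (MMI) HMM training.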
“…Although certain publications regarding Markov-model-based recognition of isolated characters exist (cf., e.g., [21,56,57,95]), it is at least questionable whether the use of these models is appropriate for such data. Instead, the approach shows its strength especially for sequences.…”
Section: Applications
confidence: 99%
“…Often (local) pixel intensities or pixel density distributions (optionally their average or median values) are considered as some sort of basic features (as, e.g., in [3,40,45,87,117]). Quite frequently, starting from some raw feature set, optimized representations are computed by some standard analytic transforms like PCA [28,40,58,95,97,117], LDA [16,58], or function transforms (DCT, FFT, Wavelet, Radon, etc.) [16,30,45].…”
Section: Serialization and Feature Extraction
confidence: 99%
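As an example of the function transforms mentioned in the excerpt above, a DCT-II can compress a pixel-density profile into a few smooth coefficients. The sketch below is illustrative only: the synthetic profile, its length, and the number of retained coefficients are assumptions, not values from the cited survey.

```python
import numpy as np

def dct2(signal):
    """Orthonormal DCT-II of a 1-D signal, built explicitly from its cosine basis."""
    n = len(signal)
    k = np.arange(n)[:, None]
    basis = np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
    scale = np.where(k == 0, np.sqrt(1 / n), np.sqrt(2 / n))  # orthonormal scaling
    return (scale * basis) @ signal

# Column pixel-density profile of a synthetic character window; keeping only the
# leading DCT coefficients yields a compact, smoothed feature vector.
profile = np.sin(np.linspace(0, np.pi, 32))  # illustrative density curve
coeffs = dct2(profile)[:8]
print(coeffs.shape)  # (8,)
```

Truncating the coefficient vector acts as a low-pass filter, which is why such transforms are attractive for noisy handwriting features.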