2014
DOI: 10.1109/tpami.2014.2306423

Combining Structure and Parameter Adaptation of HMMs for Printed Text Recognition

Abstract: We present two algorithms that extend existing HMM parameter adaptation algorithms (MAP and MLLR) by adapting the HMM structure. This improvement relies on a smart combination of MAP and MLLR with a structure optimization procedure. Our algorithms are semi-supervised: to adapt a given HMM to new data, they require little labeled data for parameter adaptation and a moderate amount of unlabeled data to estimate the criteria used for HMM structure optimization. Structure optimization is based on state split…
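The abstract's recipe can be made concrete with a toy sketch. The code below is a minimal illustration, not the authors' implementation: it assumes 1-D Gaussian emissions, uses a naive mean-perturbation state split accepted only when it improves likelihood on unlabeled sequences, and substitutes a simple MAP interpolation of means (with hard labeled alignments) for the full MAP/MLLR machinery. All function names, the split heuristic, and the hyperparameters are assumptions made for illustration.

```python
# Minimal, self-contained sketch (NOT the authors' implementation): structure
# adaptation by greedy state splitting scored on unlabeled sequences, followed
# by a MAP-style interpolation of Gaussian means on a small labeled set, which
# stands in here for the full MAP/MLLR parameter-adaptation step.
import numpy as np

def forward_loglik(obs, pi, A, means, var):
    """log P(obs) under a 1-D Gaussian HMM, via the scaled forward pass."""
    b = np.exp(-0.5 * (obs[:, None] - means[None, :]) ** 2 / var)
    b /= np.sqrt(2 * np.pi * var)
    alpha, loglik = pi * b[0], 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()
        loglik += np.log(c)
        alpha = (alpha / c) @ A * b[t]
    return loglik + np.log(alpha.sum())

def split_state(pi, A, means, s, eps=0.1):
    """Duplicate state s with perturbed means, sharing its incoming mass."""
    n = len(pi)
    pi2 = np.append(pi, pi[s] / 2)
    pi2[s] /= 2
    A2 = np.zeros((n + 1, n + 1))
    A2[:n, :n] = A
    A2[n, :n] = A[s]           # the twin copies s's outgoing transitions
    A2[:, n] = A2[:, s] / 2    # incoming mass is shared between the twins
    A2[:, s] /= 2
    means2 = np.append(means, means[s] + eps)
    means2[s] -= eps
    return pi2, A2, means2

def adapt(unlabeled, labeled, pi, A, means, var, max_splits=3, tau=5.0):
    """Split while unlabeled likelihood improves, then MAP-adapt the means."""
    score = lambda p, a, m: sum(forward_loglik(o, p, a, m, var) for o in unlabeled)
    best = score(pi, A, means)
    for _ in range(max_splits):
        cands = [split_state(pi, A, means, s) for s in range(len(pi))]
        lls = [score(*c) for c in cands]
        if max(lls) <= best:   # no split helps: structure search stops
            break
        best, (pi, A, means) = max(lls), cands[int(np.argmax(lls))]
    means = means.copy()       # MAP: posterior mean = (tau*prior + sum x) / (tau + n)
    for s in range(len(means)):
        xs = np.array([x for x, st in labeled if st == s])
        if xs.size:
            means[s] = (tau * means[s] + xs.sum()) / (tau + xs.size)
    return pi, A, means

# Toy run: a 2-state model meets data with 3 modes; splitting may add a state.
rng = np.random.default_rng(0)
unlabeled = [rng.choice([0.0, 1.0, 2.0], 60) + rng.normal(0, 0.15, 60)
             for _ in range(4)]
labeled = [(0.05, 0), (1.10, 1), (0.98, 1), (-0.07, 0)]   # (frame, state) pairs
pi, A, means = adapt(unlabeled, labeled, np.array([0.5, 0.5]),
                     np.full((2, 2), 0.5), np.array([0.0, 1.0]), var=0.1)
print(len(means), "states; means:", np.round(means, 2))
```

The greedy accept/reject on unlabeled-data likelihood is a crude stand-in for the paper's structure-selection criteria, which the abstract says are estimated from a moderate amount of unlabeled data.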

Cited by 17 publications (9 citation statements, 2016-2023)
References: 53 publications
“…Ait-Mohand et al. [9] recently presented an interesting study of mixed-font text recognition using HMMs. The main contribution of the study was related to HMM model length adaptation techniques that were integrated with HMM data adaptation techniques, such as maximum likelihood linear regression (MLLR) and maximum a posteriori (MAP) techniques.…”
Section: Related Work (citation type: mentioning; confidence: 99%)
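For readers unfamiliar with the MLLR step this citation names: MLLR adapts Gaussian means through a shared affine transform estimated by maximum likelihood. The sketch below is a hedged illustration under simplifying assumptions (1-D features, a single global regression class, hard frame-to-state alignments), where the estimate reduces to variance-weighted least squares; it is not the paper's implementation, and all names are illustrative.

```python
# MLLR mean adaptation, toy 1-D case: find (a, b) so that mu_s -> a*mu_s + b
# best explains the adaptation frames, weighting each frame by its precision.
import numpy as np

def mllr_mean_transform(means, variances, frames, states):
    """Closed-form (a, b) maximizing the Gaussian log-likelihood of the
    adaptation frames given hard frame-to-state alignments."""
    mu = means[states]             # prior mean aligned to each frame
    w = 1.0 / variances[states]    # per-frame precision weights
    G = np.array([[np.sum(w * mu * mu), np.sum(w * mu)],
                  [np.sum(w * mu),      np.sum(w)]])
    k = np.array([np.sum(w * mu * frames), np.sum(w * frames)])
    return np.linalg.solve(G, k)   # normal equations of the weighted LS fit

# Demo: the adaptation data is the prior model shifted by +0.5; MLLR recovers it.
means, variances = np.array([0.0, 1.0, 2.0]), np.array([0.2, 0.2, 0.2])
rng = np.random.default_rng(0)
states = rng.integers(0, 3, 200)
frames = means[states] + 0.5 + rng.normal(0, 0.3, 200)
a, b = mllr_mean_transform(means, variances, frames, states)
print("adapted means:", np.round(a * means + b, 2))   # close to [0.5, 1.5, 2.5]
```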
“…Another approach for addressing mixed fonts was proposed by Ait-Mohand et al. [9]. They proposed HMM adaptation techniques in which adaptation was performed on both the HMM data and the model length (number of states).…”
Section: Mixed-font Text Recognition (citation type: mentioning; confidence: 99%)
“…We use the LITIS OCR based on HMMs with a variable state number, described in [8]. Since the language is unknown during recognition, this OCR is a language-free version working at the character level (without any language model or dictionary).…”
Section: Language Identification System (citation type: mentioning; confidence: 99%)
“…The textual content of each line is decoded using Viterbi decoding without contextual resources, as is the case for the standard recognizer (no dictionary, no language model).…”
Section: Recognition Engine (citation type: mentioning; confidence: 99%)
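The decoding these last two citations describe is an unconstrained Viterbi pass: with no dictionary or language model, the best state path is read off directly and mapped to characters. Below is a minimal sketch of that idea; the one-state-per-character layout, the random emission scores, and all names are illustrative assumptions, not details from [8].

```python
# Plain Viterbi decoding with no lexical constraints: best state path over
# per-frame emission log-likelihoods, then a direct state-to-character readout.
import numpy as np

def viterbi(log_pi, log_A, log_B):
    """Best state path; log_B[t, s] holds per-frame emission log-likelihoods."""
    T, S = log_B.shape
    delta = log_pi + log_B[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A            # scores[i, j]: i -> j
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + log_B[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy decode: 3 character classes, 12 frames of stand-in emission scores.
alphabet = "abc"
rng = np.random.default_rng(0)
log_B = np.log(rng.dirichlet(np.ones(3), size=12))
log_pi = np.log(np.full(3, 1 / 3))
log_A = np.log(np.full((3, 3), 1 / 3))   # flat transitions: no language model
path = viterbi(log_pi, log_A, log_B)
chars = [alphabet[s] for s in path]
# Collapse consecutive repeats (one state per character in this toy layout).
print("".join(c for i, c in enumerate(chars) if i == 0 or chars[i - 1] != c))
```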