2009 10th International Conference on Document Analysis and Recognition
DOI: 10.1109/icdar.2009.22

Scaling Up Whole-Book Recognition

Cited by 9 publications (11 citation statements). References 5 publications.
“…Words in the dictionary are expanded by appending all their doublets at their end in the same order. Except for those changes, word recognition and iconic model adaptation remain exactly the same as introduced in [5].…”
Section: Methods (mentioning confidence: 89%)
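The excerpt does not define "doublets". A minimal sketch of the described dictionary expansion, under the assumption that a word's doublets are its consecutive character bigrams appended in reading order, might look like this; the function names and the bigram reading are illustrative, not the cited paper's code:

```python
def doublets(word):
    """Consecutive character bigrams of a word (an assumed reading of
    "doublets"; the cited paper may define the term differently)."""
    return [word[i:i + 2] for i in range(len(word) - 1)]

def expand_dictionary(words):
    """Append each word's doublets to the word itself, in the same order,
    as the excerpt describes; e.g. "book" -> "book" + "bo" + "oo" + "ok"."""
    return [w + "".join(doublets(w)) for w in words]

# The expanded entries would then be fed to the unchanged word-recognition
# and iconic-model-adaptation steps of [5].
print(expand_dictionary(["book"]))  # ['bookbooook']
```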
“…The whole process works this way: in Stage 1, apply iconic model adaptation algorithm as in [5] until it converges; in Stage 2, alternate between adapting linguistic and iconic models multiple times.…”
Section: Methods (mentioning confidence: 99%)
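A control-flow sketch of the quoted two-stage procedure, using hypothetical .adapt(...) methods as stand-ins for the adaptation algorithms of [5] (which are not reproduced here):

```python
def two_stage_adaptation(iconic, linguistic, book, rounds=5, tol=1e-4):
    """Stage 1: run iconic model adaptation until it converges.
    Stage 2: alternate linguistic and iconic adaptation several times.
    The .adapt(...) calls are placeholders for the algorithms of [5]."""
    # Stage 1: iterate iconic adaptation until its cost stops changing.
    prev_cost = float("inf")
    while True:
        cost = iconic.adapt(book, linguistic)   # hypothetical API
        if abs(prev_cost - cost) < tol:
            break
        prev_cost = cost

    # Stage 2: alternate between the two models a fixed number of times.
    for _ in range(rounds):
        linguistic.adapt(book, iconic)          # hypothetical API
        iconic.adapt(book, linguistic)          # hypothetical API

    return iconic, linguistic
```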
“…Rasagna et al. [27] cluster word images in an entire Telugu book using locality sensitive hashing and use this to correct character labels based on majority voting. Xiu and Baird [36] measure the disagreements between OCR results and language models using mutual entropy across a passage. This measure is then used to correct frequent OCR errors, or to add new words to the language model.…”
Section: Previous Work (mentioning confidence: 99%)
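The mutual-entropy measure itself is not given in the excerpt; one plausible cross-entropy style reading, scoring how strongly the OCR (iconic) posterior over candidate words disagrees with the language-model distribution, is sketched below. The exact formula in [36] may differ.

```python
import math

def disagreement(p_ocr, p_lm, eps=1e-12):
    """Cross-entropy of the language-model distribution under the OCR
    posterior, used here as an illustrative stand-in for the
    mutual-entropy disagreement measure attributed to Xiu and Baird [36].
    Both arguments map candidate words to probabilities."""
    return sum(p * -math.log(p_lm.get(word, eps))
               for word, p in p_ocr.items() if p > 0.0)

# A large score flags a passage where OCR output and the language model
# disagree strongly, which (per the excerpt) is where errors are corrected
# or new words are added to the language model.
print(disagreement({"cat": 0.9, "cot": 0.1}, {"cat": 0.7, "cot": 0.2}))
```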