Whole-book recognition is a document image analysis strategy that operates on the complete set of a book's page images, attempting to improve accuracy by automatic unsupervised adaptation. Our algorithm expects to be given initial iconic and linguistic models, derived from (generally errorful) OCR results and (generally incomplete) dictionaries; then, guided entirely by evidence internal to the test set, the algorithm corrects the models, yielding improved accuracy. We have found that successful corrections are often closely associated with "disagreements" between the models, which can be detected within the test set by measuring cross entropy between (a) the posterior probability distribution of character classes (the recognition results from image classification alone) and (b) the posterior probability distribution of word classes (the recognition results from image classification combined with linguistic constraints). We report experiments on long passages (up to 180 pages) revealing that: (1) disagreements and error rates are strongly correlated; (2) our algorithm can drive down recognition error rates by nearly an order of magnitude; and (3) the longer the passage, the lower the error rate achievable. We also propose formal models for a book's text, for iconic and linguistic constraints, and for our whole-book recognition algorithm, and, using these, we rigorously prove sufficient conditions for the whole-book recognition strategy to succeed in the ways illustrated in the experiments.
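The disagreement measure described above is a cross entropy between two posterior distributions over the same character image. The sketch below is not the authors' implementation; it is a minimal illustration of that measurement, assuming hypothetical posteriors over a three-class alphabet and natural-log cross entropy.

```python
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    """Cross entropy H(p, q) = -sum_c p(c) * log q(c), in nats."""
    p = np.asarray(p, dtype=float)
    q = np.clip(np.asarray(q, dtype=float), eps, None)
    return float(-np.sum(p * np.log(q)))

# Hypothetical posteriors over the classes {'c', 'e', 'o'} for one character image.
p_iconic = [0.50, 0.30, 0.20]      # from image classification alone
p_linguistic = [0.10, 0.80, 0.10]  # after combining with linguistic constraints

disagreement = cross_entropy(p_iconic, p_linguistic)
print(f"disagreement = {disagreement:.3f} nats")  # larger values flag likely model errors
```

In this reading, a high cross entropy means the iconic and linguistic models point toward different interpretations, which is the signal the adaptation step uses to decide where the models should be corrected.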
Research on offline handwritten Arabic character recognition has received more and more attention in recent years because of the increasing need for Arabic document digitization. The variation in Arabic handwriting brings great difficulty to character segmentation and recognition; e.g., the sub-parts (diacritics) of an Arabic character may shift away from the main part. In this paper, a new probabilistic segmentation model is proposed. First, a contour-based over-segmentation method is applied, cutting the word image into graphemes. The graphemes are sorted into three queues: character main parts, sub-parts (diacritics) above the main parts, and sub-parts below the main parts. The confidence for each character is calculated by the probabilistic model, taking into account the recognizer output and the geometric confidence together with logical constraints. Then a global optimization is conducted to find the optimal cutting path, taking the weighted average of character confidences as the objective function. Experiments on handwritten Arabic documents with various writing styles show that the proposed method is effective.
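The abstract describes a search over cutting paths that maximizes a weighted average of per-character confidences. The following sketch is not the paper's method; it is a simplified, assumption-laden illustration that enumerates all groupings of a short grapheme sequence and scores each candidate character with a hypothetical mix of recognizer and geometric scores (the functions `recognizer_score`, `geometric_score`, and the weight `alpha` are placeholders, not taken from the paper).

```python
from itertools import combinations

def enumerate_cuttings(n):
    """Yield every way to split grapheme indices 0..n-1 into consecutive groups."""
    for k in range(n):
        for cuts in combinations(range(1, n), k):
            bounds = (0,) + cuts + (n,)
            yield [list(range(bounds[i], bounds[i + 1])) for i in range(len(bounds) - 1)]

def char_confidence(group, recognizer_score, geometric_score, alpha=0.7):
    """Hypothetical combined confidence for one candidate character:
    a weighted mix of recognizer output and geometric plausibility."""
    return alpha * recognizer_score(group) + (1 - alpha) * geometric_score(group)

def best_cutting_path(n_graphemes, recognizer_score, geometric_score):
    """Pick the segmentation whose average character confidence is highest."""
    best_path, best_obj = None, float("-inf")
    for path in enumerate_cuttings(n_graphemes):
        confs = [char_confidence(g, recognizer_score, geometric_score) for g in path]
        obj = sum(confs) / len(confs)
        if obj > best_obj:
            best_path, best_obj = path, obj
    return best_path, best_obj

# Toy usage with placeholder scoring functions (purely illustrative):
path, obj = best_cutting_path(
    4,
    recognizer_score=lambda g: 0.9 if len(g) <= 2 else 0.4,
    geometric_score=lambda g: 0.8,
)
print(path, round(obj, 3))
```

Exhaustive enumeration is used here only for clarity; a practical system would prune or search the cutting paths more efficiently, and would also enforce the logical constraints on how diacritic graphemes attach to main parts.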