This paper presents a new framework for OCR error detection that uses a conditional random field model to combine rich features from multiple sources, including confusion networks (c-nets), local lexical context, and a recurrent neural network language model (RNNLM). We propose a novel, efficient method for computing character-level RNNLM scores over c-nets using dynamic programming and partial unfolding of the c-net. Experiments show that our error detection model yields consistent improvements over the strong baseline employed by our current OCR demo system, as measured by average precision and detection error trade-off (DET) curves on two test sets of Chinese document images. Both linguistic and recognition features contribute to the high performance, with the former being especially informative. In addition, we show that the proposed c-net RNNLM feature plays a particularly beneficial role in improving the error detection rate. These results suggest that applications built on top of image text recognition can benefit substantially from a hybrid strategy that combines techniques from optical character recognition and natural language processing.
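To make the c-net scoring idea concrete, the following is a minimal sketch, not the authors' implementation: it assumes the confusion network is a list of slots, each holding (character, recognizer posterior) alternatives, and it uses a placeholder character-level language model in place of a real RNNLM. A Viterbi-style dynamic program with truncated contexts and beam pruning stands in for the paper's partial unfolding of the c-net.

```python
# Sketch only: LM scoring of confusion-network arcs via dynamic programming.
# Assumptions: `cnet` is a list of slots, each a list of (char, posterior);
# `lm_logprob` is any character-level LM scorer (a real RNNLM would carry a
# hidden state as its context instead of the character n-gram used here).
import math
from collections import defaultdict

def lm_logprob(context, char):
    """Placeholder character LM: uniform over a pretend 5000-character vocabulary."""
    return math.log(1.0 / 5000)

def cnet_lm_scores(cnet, beam=8, order=3):
    """For each slot and alternative, return the best accumulated LM + recognizer
    log-score reachable through that alternative (Viterbi-style DP).
    Contexts are truncated to `order` - 1 characters and pruned to `beam`
    hypotheses, a stand-in for partially unfolding the c-net."""
    hyps = {(): 0.0}  # context tuple -> best log-score so far
    best_arc_score = [defaultdict(lambda: -math.inf) for _ in cnet]

    for t, slot in enumerate(cnet):
        new_hyps = {}
        for ctx, score in hyps.items():
            for char, post in slot:
                s = score + lm_logprob(ctx, char) + math.log(max(post, 1e-12))
                best_arc_score[t][char] = max(best_arc_score[t][char], s)
                new_ctx = (ctx + (char,))[-(order - 1):]
                if s > new_hyps.get(new_ctx, -math.inf):
                    new_hyps[new_ctx] = s
        # Beam pruning keeps the DP tractable on wide c-nets.
        hyps = dict(sorted(new_hyps.items(), key=lambda kv: -kv[1])[:beam])

    return [dict(d) for d in best_arc_score]

if __name__ == "__main__":
    cnet = [[("中", 0.9), ("申", 0.1)],
            [("国", 0.7), ("图", 0.3)]]
    for t, scores in enumerate(cnet_lm_scores(cnet)):
        print(t, scores)
```

The per-arc scores produced this way could then be normalized and used as one feature among the recognition and linguistic features fed to the CRF detector.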