Machine-printed and handwritten character recognition has become a major research topic in several real-time applications. Recent advancements in deep learning and image processing techniques can be employed for printed and handwritten character recognition. Telugu character recognition (TCR) remains a difficult task in optical character recognition (OCR), which transforms printed and handwritten characters into their respective text formats. To this end, this study introduces an effective deep learning-based TCR model for printed and handwritten characters (DLTCR-PHWC). The proposed DLTCR-PHWC technique aims to detect and recognize both printed and handwritten characters that appear in the same image. First, image pre-processing is performed using an adaptive fuzzy filtering technique. Next, line and character segmentation are performed to derive useful regions. In addition, a fusion of EfficientNet and CapsuleNet models is used for feature extraction. Finally, the Aquila optimizer (AO) with a bi-directional long short-term memory (BiLSTM) model is utilized for the recognition process. A detailed experimental analysis of the proposed DLTCR-PHWC technique is carried out on a Telugu character dataset, and the simulation outcomes demonstrate the superiority of the proposed technique over recent state-of-the-art approaches.
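The following is a minimal sketch, in PyTorch, of the feature-fusion and recognition stages outlined above (EfficientNet and a second branch fused, then fed to a BiLSTM classifier). It is not the authors' implementation: the CapsuleNet branch is stubbed with a small convolutional placeholder, and all layer sizes, the class count, and the fusion strategy are illustrative assumptions.

```python
# Hedged sketch of the DLTCR-PHWC recognition head: EfficientNet features fused
# with a placeholder second branch, passed to a BiLSTM and a linear classifier.
# Assumes torch and torchvision; sizes are illustrative, not the paper's settings.
import torch
import torch.nn as nn
from torchvision import models


class FusionRecognizer(nn.Module):
    def __init__(self, num_classes: int, branch_dim: int = 256, hidden: int = 128):
        super().__init__()
        # EfficientNet-B0 backbone up to global pooling -> (B, 1280, 1, 1).
        effnet = models.efficientnet_b0(weights=None)
        self.effnet = nn.Sequential(*list(effnet.children())[:-1])
        # Hypothetical stand-in for the CapsuleNet branch (not a real capsule net).
        self.capsule_branch = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv2d(64, branch_dim, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # BiLSTM over the fused feature vector treated as a length-1 sequence;
        # a per-character sequence input would instead supply per-step features.
        self.bilstm = nn.LSTM(1280 + branch_dim, hidden,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f1 = self.effnet(x).flatten(1)                    # (B, 1280)
        f2 = self.capsule_branch(x).flatten(1)            # (B, branch_dim)
        fused = torch.cat([f1, f2], dim=1).unsqueeze(1)   # (B, 1, D)
        out, _ = self.bilstm(fused)
        return self.classifier(out[:, -1])                # (B, num_classes)


if __name__ == "__main__":
    model = FusionRecognizer(num_classes=52)              # illustrative class count
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)                                   # torch.Size([2, 52])
```

In the paper's setting, the Aquila optimizer would tune the BiLSTM hyperparameters; the sketch above omits that step and shows only the forward pass of the fused recognizer.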