Capturing images with cellphones and sharing them on social media has become part and parcel of everyday life. When an image is forwarded many times on social media, it can be heavily distorted because it passes through several different devices. This work deals with text detection in such distorted images. We consider images passed through three mobile devices over WhatsApp, which results in four images (including the original image). Unlike existing methods that aim at developing new detection techniques, we utilize the results produced by existing detectors to improve performance. The proposed method extracts Hu moments from the detected texts of the images and applies fuzzy logic to them. The similarity between the text detection results given by three existing methods is studied to determine the best pair of detected texts. The same similarity estimation is then used in a novel way to remove extra background (non-text) and to restore missing text information. Experimental results on our own dataset and on benchmark natural scene image datasets, namely MSRA-TD500, ICDAR2017-MLT, Total-Text, CTW1500, and COCO, show that the proposed method outperforms existing methods.
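The abstract does not give implementation details, so the following is only a minimal sketch of how a Hu-moment-based similarity between detected text regions could be computed with OpenCV. The Otsu thresholding step, the log scaling of the moments, and the 1/(1 + d) similarity mapping are assumptions for illustration, not the paper's exact formulation.

```python
import cv2
import numpy as np

def hu_signature(region):
    """Log-scaled Hu moments of a binarized text region (grayscale crop)."""
    # Binarize the detected text box; Otsu thresholding is an assumption here.
    _, binary = cv2.threshold(region, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    moments = cv2.moments(binary)
    hu = cv2.HuMoments(moments).flatten()
    # Log transform brings the seven moments to comparable scales.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

def similarity(region_a, region_b):
    """Similarity between two detected text regions (higher = more alike)."""
    d = np.linalg.norm(hu_signature(region_a) - hu_signature(region_b))
    return 1.0 / (1.0 + d)
```

In this sketch, a high similarity score between text regions detected by two different methods would indicate an agreeing (and therefore likely reliable) detection pair, while low scores could flag background regions to discard, roughly mirroring the selection and pruning steps described above.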