2018
DOI: 10.1007/978-981-13-1280-9_15

Multi-lingual Text Localization from Camera Captured Images Based on Foreground Homogenity Analysis

Cited by 17 publications (3 citation statements)
References 14 publications
“…This method is particularly susceptible to blurring and parameter change, capturing only text components. Dutta et al [39] provide an alternative method to address these concerns, in which foreground homogeneity is achieved by binning or grouping grey levels to segregate text components according to their stability and pixel density. Additional attempts have been made by Chakraborty et al [40] and Panda et al [41] to investigate the variation of the MSER parameters with respect to image dimension and text region dimension, thereby building a relation that dynamically adapts MSER. The SWT concept was established by Epshtein et al [42] using Canny edge detection and a subsequent transform on the edge image, which outputs the most probable stroke distance and width for each pixel.…”
Section: Related Study
Mentioning confidence: 99%
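The grey-level binning idea attributed to Dutta et al [39] in the statement above can be pictured with a short sketch. The Python/OpenCV snippet below is only an illustration of that general approach, not the paper's implementation: the bin count, minimum area, and density bounds are assumed values chosen for readability, and the actual method applies further stability checks before accepting a component.

```python
import cv2
import numpy as np

def candidate_text_components(gray, n_bins=8, min_area=30, density_range=(0.2, 0.9)):
    """Group grey levels into coarse bins, extract connected components per bin,
    and keep components whose fill density is plausible for text strokes."""
    boxes = []
    bin_width = 256 // n_bins
    for b in range(n_bins):
        lo, hi = b * bin_width, (b + 1) * bin_width - 1
        # Binary mask of pixels whose intensity falls inside this grey-level bin.
        mask = ((gray >= lo) & (gray <= hi)).astype(np.uint8) * 255
        n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
        for i in range(1, n_labels):  # label 0 is the background
            x, y, w, h, area = stats[i]
            if area < min_area:
                continue
            density = area / float(w * h)  # fraction of the bounding box that is filled
            if density_range[0] <= density <= density_range[1]:
                boxes.append((x, y, w, h))
    return boxes

if __name__ == "__main__":
    # "scene.jpg" is a placeholder for any camera-captured scene image.
    gray = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
    print(candidate_text_components(gray))
```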
“…The visual images provide accurate and appropriate details for oblivious direction-finding, image perceptive, and retrieval methods, respectively [13]. This complex image frequently incorporates a variety of fonts and other properties [2].…”
Section: Introduction
Mentioning confidence: 99%
“…The visual images provide accurate and appropriate details for oblivious direction-finding, image perceptive, and retrieval methods, respectively [9]. This complex image frequently incorporates a variety of fonts and other properties [10].…”
Section: Introduction
Mentioning confidence: 99%