2013 12th Mexican International Conference on Artificial Intelligence (MICAI)
DOI: 10.1109/micai.2013.28
Image Processing for Automatic Reading of Electro-Mechanical Utility Meters

Cited by 14 publications (6 citation statements). References 8 publications.
“…In [16], the authors also employed handcrafted features for dial recognition. In addition to binarization and line intersection, the counter region was detected using Scale-Invariant Feature Transform (SIFT) features.…”
Section: B. Dial Meter Reading
Confidence: 99%
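The binarization-plus-geometry approach to dial recognition mentioned in the statement above can be illustrated with a minimal sketch (not the cited authors' code, and omitting the SIFT-based counter-region detection step): threshold a dial crop, treat the dark pixels as the needle, and map the needle's mean direction from the dial centre to a digit. The function name, the fixed threshold, and the centred-dial assumption are all illustrative choices.

```python
import numpy as np

def read_dial(gray, threshold=128):
    """Estimate one dial's reading from a grayscale crop.

    Sketch of a binarization-plus-geometry dial reader (assumptions:
    dial centre at the image centre, needle is the darkest structure,
    10 digits arranged clockwise starting at "12 o'clock").
    """
    h, w = gray.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0      # assumed dial centre
    ys, xs = np.nonzero(gray < threshold)       # binarize: needle pixels are dark
    if len(xs) == 0:
        return None                             # no needle found
    # Mean needle direction, measured clockwise from straight up.
    angle = np.arctan2(np.mean(xs) - cx, -(np.mean(ys) - cy)) % (2 * np.pi)
    return int(angle / (2 * np.pi) * 10) % 10   # 10 digits around the dial

# Synthetic example: a vertical dark stroke above the centre reads 0.
img = np.full((41, 41), 255, dtype=np.uint8)
img[1:20, 20] = 0
print(read_dial(img))  # -> 0
```

A real system would first localize each dial (the cited work uses SIFT features for the counter region) and fit the needle as a line rather than a pixel centroid.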
“…), dirt in the region of interest, and taken at a distance. In addition, most approaches are based on handcrafted features [11], [15], and were evaluated exclusively on private datasets [10], [11], [14]- [16]. To the best of our knowledge, there are no public datasets containing dial meter images in the literature.…”
Section: Introduction
Confidence: 99%
“…Ocampo-Vega et al. [4] introduce a methodology based on image processing and segmentation for acquiring and processing images of pointer-dial energy meters, obtaining efficient and accurate meter readings.…”
Section: Fig. 2 Flowchart of the Text Recognition in Ammeter Dial Plate
Confidence: 99%
“…In Reference [4], the image processing procedure is divided into four steps: image preprocessing, target-region positioning, character segmentation, and character recognition. Only in this last step do the authors use a neural network.…”
Section: Introduction
Confidence: 99%
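The four-step procedure summarized in the statement above can be sketched schematically as follows. The helper functions are illustrative stand-ins (simple thresholding, bounding-box positioning, and projection-profile segmentation), not the reference implementation; in the cited work only the final recognition step uses a neural network, which is stubbed out here.

```python
import numpy as np

def preprocess(gray, threshold=128):
    """Step 1: binarize so ink pixels become True (illustrative threshold)."""
    return gray < threshold

def locate_region(binary):
    """Step 2: position the target region as the tight ink bounding box."""
    ys, xs = np.nonzero(binary)
    return binary[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def segment_characters(region):
    """Step 3: split characters on blank columns (vertical projection)."""
    ink_cols = region.any(axis=0)
    chars, start = [], None
    for x, has_ink in enumerate(ink_cols):
        if has_ink and start is None:
            start = x
        elif not has_ink and start is not None:
            chars.append(region[:, start:x])
            start = None
    if start is not None:
        chars.append(region[:, start:])
    return chars

def recognize(char):
    """Step 4: placeholder classifier (a neural network in the paper)."""
    return "?"  # a trained model would map each glyph to a digit here

def read_meter(gray):
    region = locate_region(preprocess(gray))
    return "".join(recognize(c) for c in segment_characters(region))

# Two dark blobs separated by a blank column segment into two characters.
img = np.full((10, 12), 255, dtype=np.uint8)
img[2:8, 1:4] = 0
img[2:8, 7:11] = 0
print(read_meter(img))  # two unrecognized glyphs -> "??"
```

The value of the staged design is that each step can be evaluated and swapped independently; the paper's choice to reserve learning for the final step keeps the earlier stages cheap and deterministic.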