2006
DOI: 10.1007/s10032-006-0014-0

Object count/area graphs for the evaluation of object detection and segmentation algorithms

Abstract: Evaluation of object detection algorithms is a non-trivial task: a detection result is usually evaluated by comparing the bounding box of the detected object with the bounding box of the ground-truth object. The commonly used precision and recall measures are computed from the overlap area of these two rectangles. However, these measures have several drawbacks: they do not give intuitive information about the proportion of correctly detected objects and the number of false alarms, and they cannot be accumul…
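The area-overlap measures discussed in the abstract are easy to illustrate. The sketch below is not the paper's full object count/area graph method, only a minimal example of how area precision and area recall can be computed for one pair of axis-aligned boxes; the function names and example coordinates are illustrative assumptions, not from the paper.

```python
def area(box):
    """Area of an axis-aligned box given as (x1, y1, x2, y2)."""
    return max(0.0, box[2] - box[0]) * max(0.0, box[3] - box[1])

def intersection_area(a, b):
    """Overlap area of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def area_precision_recall(detection, ground_truth):
    """Area precision = overlap / detection area; area recall = overlap / ground-truth area."""
    overlap = intersection_area(detection, ground_truth)
    p = overlap / area(detection) if area(detection) > 0 else 0.0
    r = overlap / area(ground_truth) if area(ground_truth) > 0 else 0.0
    return p, r

# Illustrative example: a detection box slightly larger than the ground truth.
print(area_precision_recall((10, 10, 60, 40), (12, 12, 55, 38)))
```

These per-pair measures are the starting point; the paper's contribution concerns how such values are thresholded and accumulated over a whole database, which this snippet does not cover.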

Cited by 313 publications (146 citation statements). References 20 publications.

“…The main question when choosing a method for scene text detection particularly is how to deal with under- and over-segmentation errors. In this competition, we employ the method by Wolf et al. [15] that is specifically designed to evaluate scene text detection approaches. We used the DetEval evaluation software with default parameters for evaluating the competing methods.…”
Section: B. Performance Evaluation, 1) Text Localization Task (mentioning)
confidence: 99%
“…The concept of the scene text recognition challenge is similar to the ICDAR 2003 and ICDAR 2005 Robust Reading competitions. Some problems were reported about the dataset (slightly larger bounding boxes, inconsistent definition of a "word", for instance whether a hyphen breaks the words or not) as well as the evaluation scheme (handling of one-to-many and many-to-one matches) used in the previous competitions [15]. We created ground-truth of the ICDAR 2003 Robust Reading competition dataset from scratch and adapted a new evaluation scheme [15] to resolve these issues.…”
Section: Introduction (mentioning)
confidence: 99%
“…We used (Wolf and Jolion, 2006) to evaluate our results, following the approach of the ICDAR 2011 competition (Karatzas et al., 2011). We used the area precision threshold proposed in the original publication, t_p = 0.4, and we decreased the area recall threshold from t_r = 0.8 to t_r = 0.6 in order to be more flexible with accent and punctuation missed detections (e.g. À, È, É, ..., !, ?)…”
Section: Discussion (mentioning)
confidence: 99%
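The thresholds quoted above (t_p = 0.4, t_r = 0.6) act as acceptance criteria on the area-based measures. The sketch below shows only the simple one-to-one case, assuming a pair is accepted when both measures clear their thresholds; the helper name is an illustrative assumption, and the cited scheme additionally handles one-to-many and many-to-one matches, which are omitted here.

```python
def is_one_to_one_match(area_recall, area_precision, t_r=0.6, t_p=0.4):
    """Accept a detection/ground-truth pair only if both area-based
    measures clear their thresholds (values taken from the quote above)."""
    return area_recall >= t_r and area_precision >= t_p

# A loose but mostly-covering detection passes under the relaxed recall threshold.
print(is_one_to_one_match(area_recall=0.65, area_precision=0.45))  # True
print(is_one_to_one_match(area_recall=0.50, area_precision=0.90))  # False
```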
“…Using the ICDAR 2011 competition evaluation scheme [17], the method achieves recall 67.6%, precision 81.1% and f-measure 75.4% in text localization (see Figure 6 for sample outputs). This represents a significant 4 percentage point improvement over the best published result (the ICDAR 2011 Robust Reading competition winner [14]) and a 7 percentage point improvement over the NM12 method [11], which demonstrates the impact of the novel contributions presented in this paper.…”
Section: Methods (mentioning)
confidence: 99%