2000
DOI: 10.1007/3-540-40053-2_39
Benchmarking for Content-Based Visual Information Search

Cited by 32 publications (24 citation statements)
References 17 publications
“…To evaluate the effectiveness of the proposed framework, experiments were performed on a general-purpose image database with 3000 images collected from the COREL and IAPR image collections [13]. This database contains diverse images in 15 manually assigned semantic categories (Mountain, Beach, Archi- For a quantitative evaluation, the performances of the individual and fusion-based similarity measures are compared using average precision curves over the top 200 returned results.…”
Section: Results
confidence: 99%
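The evaluation described above ranks retrieved images and compares similarity measures by average precision over the top 200 results. A minimal sketch of those two measures follows; the function names and the example ranking are illustrative, not taken from the cited paper.

```python
# Sketch of precision@k and average precision for a ranked retrieval list.
# `relevant` is the relevance judgment (True/False) for each returned item,
# in rank order. Names here are illustrative, not from the cited work.

def precision_at_k(relevant: list[bool], k: int) -> float:
    """Fraction of the top-k returned items that are relevant."""
    top = relevant[:k]
    return sum(top) / k if k else 0.0

def average_precision(relevant: list[bool], cutoff: int = 200) -> float:
    """Mean of precision@k over every rank k (within the cutoff) at which
    a relevant item appears; 0.0 if no relevant item is retrieved."""
    hits = 0
    precisions = []
    for rank, rel in enumerate(relevant[:cutoff], start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

# Example: a ranking where results 1, 3, and 4 are relevant.
ranking = [True, False, True, True, False]
print(precision_at_k(ranking, 3))   # precision over the first 3 results: 2/3
print(average_precision(ranking))   # mean of 1/1, 2/3, 3/4
```

Plotting `precision_at_k` against `k` for each similarity measure yields the average precision curves used for the comparison.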
“…The most interesting of these, and in some sense the closest to our own work, are the papers by Muller et al. [5] and Leung and Ip [6]. Unlike their proposals, however, our paper presents an implementation of a CBIR benchmark together with its attendant methodology. As far as we are aware, our benchmark methodology differs significantly from anything that has been proposed previously.…”
Section: Introduction
confidence: 91%
“…The advent of Web-based image retrieval systems [20][21][22] suggests this focus on algorithmic performance has become too narrow [5,6]. Our philosophy is captured in the choice of name for the first implementation of such a benchmark, viz. BIRDS-I, an acronym for Benchmark for Image Retrieval using Distributed Systems over the Internet. This name is intended to invoke the dual notions of: 1.…”
Section: Systems Perspective
confidence: 99%
“…Therefore, it may be justified to use an extensive repertoire of measures, especially when comparing two considerably different systems. Recommendations for the selection of benchmark measures are provided by Smith (1998), Leung and Ip (2000), and Müller et al. (2001).…”
Section: Evaluation Measures
confidence: 99%