2004
DOI: 10.1007/978-3-540-27814-6_31
The CLEF Cross Language Image Retrieval Track (ImageCLEF) 2004

Abstract: In this paper we describe ImageCLEF, the cross language image retrieval track of the Cross Language Evaluation Forum (CLEF). We instigated and ran a pilot experiment in 2003 where participants submitted entries for an ad hoc bilingual image retrieval task on a collection of historic photographs from St. Andrews University Library. This was designed to simulate the situation in which users would express their search request in natural language but require visual documents in return. For 2004 we …

Cited by 48 publications (40 citation statements)
References 18 publications
“…As a benchmark project, ImageCLEF has become increasingly well known through its open data platform [12]. The database used in our experiments comes from ImageCLEF 2005.…”
Section: Experiments and Results Analysis
confidence: 99%
“…the set of documents to be searched is known prior to retrieval, but the search requests are not. Given these general categories (and others), topics were created by refinement based on attributes such as name of photographer, date and location. A list of topic titles can be found in [13]. These are typical of retrieval requests from picture archives, where semantic knowledge is required in addition to the image itself to perform retrieval.…”
Section: Ad Hoc Retrieval from the St Andrews Collection
confidence: 99%
“…This resulted in 30-35 images being chosen. One of the authors then used these images for query-by-example searches to find further images in the database resembling the query, using feedback and the case notes, and selected 26 of these for the final topic set (see [14], [13]). As in the ad hoc task, participants were free to use any method for retrieval, but were asked to identify their runs along three main query dimensions: with and without relevance feedback, visual vs. visual+text, and manual vs. automatic.…”
Section: Medical Retrieval from Casimage
confidence: 99%
“…The St Andrews documents are composed of images and text annotations, as described in [3]. The search topics are similarly composed of a search image and a text description.…”
Section: Text and Image Combination Runs
confidence: 99%