2007
DOI: 10.1007/978-3-540-74999-8_3

CLEF 2006: Ad Hoc Track Overview

Abstract: We describe the objectives and organization of the CLEF 2007 Ad Hoc track and discuss the main characteristics of the tasks offered to test monolingual and cross-language textual document retrieval systems. The track was divided into two streams. The main stream offered mono- and bilingual tasks on target collections for central European languages (Bulgarian, Czech and Hungarian). Similarly to last year, a bilingual task that encouraged system testing with non-European languages against English docume…

Cited by 17 publications (1 citation statement); references 35 publications.
“…Regarding the evaluation corpus, the document collection to be used is the so-called LA Times 94 (56,472 documents, 154 MB), previously employed in the robust task of the ad hoc track of CLEF 2006 (Di Nunzio et al, 2006), which reused queries from previous CLEF Initiative (2015) events. The other English sub-collection, the so-called Glasgow Herald 95, could not be used because, having been introduced later than the LA Times 94, it does not provide relevance references (the so-called qrel files) for most queries.…”
Section: Evaluation Framework
Confidence: 99%