2008
DOI: 10.1007/978-3-540-85760-0_2

CLEF 2007: Ad Hoc Track Overview

Abstract: We describe the objectives and organization of the CLEF 2005 ad hoc track and discuss the main characteristics of the tasks offered to test monolingual, bilingual, and multilingual textual document retrieval. The performance achieved for each task is presented and a statistical analysis of results is given. The mono- and bilingual tasks followed the pattern of previous years but included target collections for two new-to-CLEF languages: Bulgarian and Hungarian. The multilingual tasks concentrated on e…

Cited by 29 publications (14 citation statements)
References 30 publications
“…We chose the experiment results conducted by Di Nunzio et al. [19] as a baseline against which to compare our results. Their results are the findings of experiments officially presented at CLEF and were also obtained using the Hamshahri collection.…”
Section: Obtained Results (mentioning)
Confidence: 99%
“…Their results are the findings of experiments officially presented at CLEF and were also obtained using the Hamshahri collection. For the first comparison, we considered one of the experiments in which a stemmer was applied to the Hamshahri documents and the input text for searching was the title text within the topic fields [19]. In Table 6, we report the 11-point interpolated precision–recall values over all queries for the three experiments.…”
Section: Obtained Results (mentioning)
Confidence: 99%
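The 11-point interpolated precision–recall curve used in this comparison is a standard TREC/CLEF measure: precision is interpolated at the eleven recall levels 0.0, 0.1, …, 1.0 and then averaged over all queries. Below is a minimal Python sketch of that computation; the function name and the toy relevance judgements are illustrative assumptions, not data from the cited experiments.

```python
def interpolated_precision_11pt(rel_flags, num_relevant):
    """11-point interpolated precision for one ranked result list.

    rel_flags: relevance (True/False) of the document at each rank.
    num_relevant: total number of relevant documents for the query.
    """
    # Precision and recall after each retrieved document.
    hits = 0
    points = []
    for rank, is_rel in enumerate(rel_flags, start=1):
        hits += is_rel
        points.append((hits / num_relevant, hits / rank))

    # Interpolated precision at recall level r is the maximum
    # precision observed at any recall >= r.
    curve = []
    for level in [i / 10 for i in range(11)]:
        precisions = [p for recall, p in points if recall >= level]
        curve.append(max(precisions, default=0.0))
    return curve

# Averaging the per-query curves gives the values reported over all
# queries; the two toy queries below are invented for illustration.
queries = [([True, False, True, False], 3), ([False, True, True], 2)]
curves = [interpolated_precision_11pt(flags, n) for flags, n in queries]
average = [sum(vals) / len(vals) for vals in zip(*curves)]
print([round(p, 2) for p in average])
```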
“…The robust task is essentially an ad hoc task which re-uses the topics and collections from past CLEF editions [9].…”
Section: Discussion (mentioning)
Confidence: 99%
“…It is also worth mentioning the non-European-language multilingual IR track at CLEF, where queries in the Amharic language were used, although the document collections were in European languages [Di Nunzio et al., 2007]. For Amharic-English IR, one of the most successful approaches was a dictionary-based one [Argaw et al., 2005].…”
Section: Corpora (mentioning)
Confidence: 99%
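The dictionary-based approach mentioned for Amharic-English IR translates the query terms through a bilingual lexicon and then runs an ordinary monolingual search over the target-language collection. Below is a minimal sketch of that general idea; the two-entry lexicon and the query terms are invented placeholders, not the actual resources of Argaw et al.

```python
# Sketch of dictionary-based cross-language IR: translate each query
# term with a bilingual lexicon, then feed the translated terms to a
# normal monolingual retrieval system over the target collection.
# The two-entry Amharic-English "dictionary" is a made-up placeholder.
BILINGUAL_DICT = {
    "ምርጫ": ["election", "vote"],
    "መንግስት": ["government"],
}

def translate_query(source_terms, dictionary):
    """Expand each source term into its translations; terms missing
    from the dictionary (e.g. proper names) pass through unchanged."""
    target_terms = []
    for term in source_terms:
        target_terms.extend(dictionary.get(term, [term]))
    return target_terms

print(translate_query(["ምርጫ", "መንግስት"], BILINGUAL_DICT))
# -> ['election', 'vote', 'government']
```

A practical system layers more on top of this skeleton (morphological analysis before lookup, translation disambiguation, weighting of alternative translations), but the lookup-and-expand step shown here is the core of the approach.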