2004
DOI: 10.1023/b:inrt.0000009438.69013.fa
Cross-Language Evaluation Forum: Objectives, Results, Achievements

Abstract: The Cross-Language Evaluation Forum (CLEF) is now in its fourth year of activity. We summarize the main lessons learned during this period, outline the state-of-the-art of the research reported in the CLEF experiments and discuss the contribution that this initiative has made to research and development in the multilingual information access domain. We also make proposals for future directions in system evaluation aimed at meeting emerging needs.

Cited by 65 publications (31 citation statements)
References 40 publications
“…The document collections for this year's GeoCLEF experiments consist of newspaper and newswire stories from the years 1994 and 1995 used in previous CLEF ad-hoc evaluations [1]. The Portuguese, English and German collections contain stories covering international and national news events, therefore representing a wide variety of geographical regions and places.…”
Section: Document Collections Used in GeoCLEF 2007
Confidence: 99%
“…With these new modules, our group is now taking the first steps to include the basic set of components required for serious participation in this kind of IR task – robust stemming, a weighting scheme and blind feedback [3].…”
Section: Improvements
Confidence: 99%
“…For a general overview of these issues, see [11]. In our approach, when a request was received (in English in this study), we automatically translated it into the desired target languages and then searched for pertinent items within each of the four corpora (English, French, Finnish and Russian).…”
Section: Multilingual Information Retrieval
Confidence: 99%