2005
DOI: 10.1007/11519645_38

Overview of the CLEF 2004 Multilingual Question Answering Track

Abstract: The general aim of the third CLEF Multilingual Question Answering Track was to set up a common and replicable evaluation framework to test both monolingual and cross-language Question Answering (QA) systems that process queries and documents in several European languages. Nine target languages and ten source languages were exploited to enact 8 monolingual and 73 cross-language tasks. Twenty-four groups participated in the exercise. Overall results showed a general increase in performance in comparison…

Cited by 55 publications (13 citation statements) | References 6 publications
“…Question Answering (QA) systems have been proposed as a feasible option for the creation of such mechanisms. Moreover, the research in this field shows a constant growth both in interest as well as in complexity [3]. This paper presents the prototype developed in the Language Technologies Laboratory at INAOE 1 for the Spanish monolingual QA evaluation task at CLEF 2005.…”
Section: Introduction
confidence: 99%
“…Section 3 presents some criteria to design the evaluation measure and presents the K and K1 measures. The results for the Main QA Track at CLEF [3] are taken as a case study to discuss and compare these measures with the previous ones used at TREC, NTCIR and CLEF. Section 4 presents the results obtained by the participants in the Pilot Task and, finally, Section 5 points out some conclusions and future work.…”
Section: Introduction
confidence: 99%
“…This process was as challenging as any translation job can be, since many cultural discrepancies and misunderstandings easily creep in. Nevertheless, as was already pointed out in 2004, “[t]he fact that manual translation captured some of the cross-cultural as well as cross-language problems is good since QA systems are designed to work in the real world” [3].…”
Section: The Evaluation Exercise
confidence: 93%