Proceedings of the Workshop on Multilingual Question Answering (MLQA '06), 2006
DOI: 10.3115/1708097.1708104
Keyword translation accuracy and cross-lingual question answering in Chinese and Japanese

Abstract: In this paper, we describe the extension of an existing monolingual QA system for English-to-Chinese and English-to-Japanese cross-lingual question answering (CLQA). We also attempt to characterize the influence of translation on CLQA performance through experimental evaluation and analysis. The paper also describes some language-specific issues for keyword translation in CLQA.

Cited by 6 publications (4 citation statements). References 6 publications.
“…In this case, because of the lack of the crucial relevant information, the MT system cannot distinguish the correct translations of "Mercury" and "first-person" from other words with the same spelling. This kind of issue in MT has been studied by previous work in MLQA (Mitamura et al, 2006; Ture and Boschee, 2016). In addition, in extractive RC, the answer spans are subphrases of the context paragraph, so generating answer spans by back-translation is not a desirable approach, as it produces homographic variations.…”
Section: Drawback of the Back-translation System
confidence: 99%
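The extractive-RC constraint quoted above can be made concrete with a minimal sketch (hypothetical data, not from the cited papers): an extractive answer must appear verbatim in the context, so a back-translated span that drifts to a homographic variant silently stops being a valid answer.

```python
# Minimal sketch (toy example): a back-translated answer span may no longer
# be an exact subphrase of the context, breaking extractive-RC supervision.

def is_extractive(answer: str, context: str) -> bool:
    """An extractive answer must appear verbatim in the context."""
    return answer in context

context = "Mercury is the closest planet to the Sun."
original_span = "Mercury"          # valid extractive span
back_translated = "quicksilver"    # homographic variant after round-trip MT

print(is_extractive(original_span, context))    # True
print(is_extractive(back_translated, context))  # False: span lost in back-translation
```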
“…It is a common approach in MLQA to translate all non-English text or keywords in queries into English beforehand, and then to treat the task as a monolingual one (Ture and Boschee, 2016; Mitamura et al, 2006; Hartrumpf et al, 2009; Esplà-Gomis et al, 2012).…”
Section: Related Work
confidence: 99%
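The translate-then-monolingual pattern described in this quote can be sketched as a two-stage pipeline. The `translate` and `monolingual_qa` functions below are hypothetical stand-ins, not any cited system's API; a real pipeline would call an MT system and a QA model.

```python
# Hedged sketch of the translate-then-monolingual MLQA pattern.
# The lexicon and retrieval logic are toy assumptions for illustration.

def translate(keywords, src="zh", tgt="en"):
    # toy dictionary standing in for an MT / bilingual-lexicon lookup
    lexicon = {"水星": "Mercury", "行星": "planet"}
    return [lexicon.get(k, k) for k in keywords]

def monolingual_qa(keywords, documents):
    # naive retrieval: return the first document containing every keyword
    for doc in documents:
        if all(k in doc for k in keywords):
            return doc
    return None

docs = ["Mercury is the closest planet to the Sun."]
english_keywords = translate(["水星", "行星"])  # Chinese query keywords
print(monolingual_qa(english_keywords, docs))
# → Mercury is the closest planet to the Sun.
```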
“…Ture and Lin (2014) described three methods for translating queries into the collection language in a probabilistic manner, improving document retrieval effectiveness over a one-best translation approach. Extending this idea to MLQA appears a logical next step, yet most prior work relies solely on the one-best translation of questions or answers (Ko et al, 2010b; García-Cumbreras et al, 2012; Chaturvedi et al, 2014), or selects the best translation out of a few options (Sacaleanu et al, 2008; Mitamura et al, 2006). Mehdad et al (2010) reported improvements by including the top ten translations (instead of the single best) and computing a distance-based entailment score with each.…”
Section: Related Work
confidence: 99%
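The contrast this quote draws — one-best translation versus a weighted set of top-k translations — can be illustrated with a small sketch. The probabilities and the bag-of-words relevance function are illustrative assumptions, not taken from any of the cited papers.

```python
# Sketch: expected relevance over top-k weighted translations vs. one-best.
# Scoring function and probabilities are toy assumptions for illustration.

def retrieval_score(query: str, doc: str) -> float:
    # toy relevance: fraction of query terms present in the document
    terms = query.split()
    return sum(t in doc for t in terms) / len(terms)

def topk_score(translations, doc):
    """Expected relevance over the translation distribution."""
    return sum(p * retrieval_score(q, doc) for q, p in translations)

doc = "Mercury is the closest planet to the Sun"
# top-k translations of an ambiguous source query, with model probabilities;
# here the MT system's one-best pick happens to be the wrong sense
translations = [("quicksilver metal", 0.6), ("Mercury planet", 0.4)]

one_best = retrieval_score(translations[0][0], doc)  # 0.0: wrong sense only
expected = topk_score(translations, doc)             # 0.4: right sense kept
print(one_best, expected)
```

Keeping the full distribution lets a lower-probability but correct sense still contribute to retrieval, which is the intuition behind the probabilistic approach quoted above.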
“…But in different systems, two or more adjoining modules may be realized as one. For example, in JAVELIN [3,4], question analysis and query building are realized in one module. In this paper, AnswerOnWeb realizes answer extraction and answer selection in one module.…”
Section: Related Work
confidence: 99%