Coling 2008: Proceedings of the 2nd Workshop on Information Retrieval for Question Answering (IRQA '08), 2008
DOI: 10.3115/1641451.1641452
Improving text retrieval precision and answer accuracy in question answering systems

Abstract: Question Answering (QA) systems are often built modularly, with a text retrieval component feeding forward into an answer extraction component. Conventional wisdom suggests that the higher the quality of the retrieval results used as input to the answer extraction module, the better the extracted answers, and hence system accuracy, will be. This turns out to be a poor assumption, because text retrieval and answer extraction are tightly coupled. Improvements in retrieval quality can be lost at the answer extraction…
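The modular pipeline the abstract describes can be illustrated with a minimal sketch (hypothetical component names; the toy term-overlap retriever and string-match extractor stand in for real components and are not from the paper):

```python
# Minimal sketch of a modular QA pipeline: retrieval feeds passages forward
# into answer extraction. End-to-end accuracy depends on BOTH stages, so a
# better retriever alone need not improve the final answers.

def retrieve(question, corpus, k=3):
    """Rank passages by naive term overlap with the question (stand-in retriever)."""
    q_terms = set(question.lower().split())
    scored = [(len(q_terms & set(p.lower().split())), p) for p in corpus]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [p for score, p in scored[:k] if score > 0]

def extract_answer(question, passages, candidates):
    """Pick the first candidate answer found in the highest-ranked passage."""
    for p in passages:
        for c in candidates:
            if c.lower() in p.lower():
                return c
    return None

corpus = [
    "Paris is the capital of France.",
    "France is in Europe; its capital city is Paris.",
]
question = "What is the capital of France?"
answer = extract_answer(question,
                        retrieve(question, corpus),
                        candidates=["Paris", "Lyon"])
# answer == "Paris"
```

The coupling is visible even here: the extractor can only succeed if the retriever's passages contain a string the extractor's candidate set can match, so the two stages must agree on a common representation.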

Cited by 10 publications (6 citation statements) · References 9 publications
“…Along with our relational representation, this appears to be the key aspect of the large improvement over basic models. Finally, models for factoid questions using linguistic structures have been developed in [2,3]. Again, the proposed methods rely on manual design of features, whereas our approach is more general for passage and/or answer re-ranking.…”
Section: Related Work
confidence: 99%
“…In this context, Bilotti and Nyberg [8] emphasize that proponents of the modular architecture naturally view the question-answering task as decomposable, and to a certain extent, it is. The modules, however, can never be fully decoupled, because question analysis and answer extraction components, at least, depend on a common representation for answers and perhaps also a common set of text processing tools.…”
Section: Typical Architecture of a Question-Answering System
confidence: 99%
“…In our case, the expected answer type refers to the named entities returned by ArNER; only paragraphs that contain a named entity of the same type as the expected answer type are validated. Hence, the named-entity answer extraction method selects any candidate answer that is an instance of the expected answer type [8]. In fact, almost all Arabic question-answering systems involve keyword extraction.…”
Section: Table 3: Mapping Question Type to Expected Answer Type
confidence: 99%
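The named-entity validation step that this citing paper describes can be sketched as a type filter (a hypothetical illustration, not ArNER's actual interface; the question-word mapping and entity tuples are invented for the example):

```python
# Hypothetical sketch of named-entity answer validation: keep only candidate
# answers whose NER type matches the expected answer type of the question.

EXPECTED_TYPE = {      # mapping question word -> expected answer type
    "who": "PERSON",
    "where": "LOCATION",
    "when": "DATE",
}

def validate_candidates(question_word, entities):
    """entities: list of (surface_form, ner_type) pairs, e.g. from an NER tagger."""
    expected = EXPECTED_TYPE.get(question_word.lower())
    return [text for text, ner_type in entities if ner_type == expected]

entities = [("Cairo", "LOCATION"), ("1952", "DATE"), ("Nasser", "PERSON")]
validate_candidates("where", entities)   # -> ["Cairo"]
```

A passage-level variant of the same idea would discard any paragraph whose entity list yields no candidates of the expected type, which is the validation the quote refers to.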
“…There is also a very substantial body of work in scoring passages for relevance to a query, e.g., [22][23][24]. This task is not quite the same as scoring a specific candidate answer, but there is a considerable overlap in the kinds of techniques that are relevant to these tasks.…”
Section: Related Work
confidence: 99%
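A common baseline for the passage-relevance scoring this last quote mentions is to weight question-term matches by inverse document frequency, so rare terms count more than frequent ones. The sketch below is a generic illustration of that idea, not the method of any of the cited works [22-24]:

```python
# IDF-weighted term-match scoring of passages against a query (toy baseline).
import math
from collections import Counter

def idf(term, passages):
    """Inverse document frequency of a term over the candidate passage set."""
    df = sum(1 for p in passages if term in p.lower().split())
    return math.log((1 + len(passages)) / (1 + df)) + 1.0

def score_passage(question, passage, passages):
    """Sum IDF-weighted counts of question terms appearing in the passage."""
    q_terms = question.lower().split()
    p_counts = Counter(passage.lower().split())
    return sum(idf(t, passages) * p_counts[t] for t in q_terms)

passages = [
    "the capital of france is paris",
    "the weather in france is mild",
    "dogs are mammals",
]
best = max(passages, key=lambda p: score_passage("capital france", p, passages))
# best == "the capital of france is paris"
```

As the quote notes, this scores a whole passage for relevance rather than a specific candidate answer, but the same term-weighting machinery carries over to answer scoring.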