Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1456
A Nil-Aware Answer Extraction Framework for Question Answering

Abstract: Recently, there has been a surge of interest in reading comprehension-based (RC) question answering (QA). However, current approaches suffer from an impractical assumption that every question has a valid answer in the associated passage. A practical QA system must possess the ability to determine whether a valid answer exists in a given text passage. In this paper, we focus on developing QA systems that can extract an answer for a question if and only if the associated passage contains an answer. If the associ…
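The nil-aware extraction idea described in the abstract (and in the citation statements below, which mention adding a "null position" to the passage) can be illustrated with a minimal sketch. This is not the paper's actual architecture; the function name, the additive start/end span scoring, and the `nil_logit`/`threshold` scheme are illustrative assumptions:

```python
def extract_answer(start_logits, end_logits, nil_logit, threshold=0.0):
    """Return the best (start, end) answer span, or None (nil) when the
    nil score matches or beats the best span score plus `threshold`.

    start_logits / end_logits: per-token scores from a hypothetical
    span-extraction model; nil_logit: the score of the extra "no answer"
    position appended alongside the passage.
    """
    n = len(start_logits)
    best_score, best_span = float("-inf"), None
    # Score every valid span (start <= end) by summing its boundary logits.
    for i in range(n):
        for j in range(i, n):
            score = start_logits[i] + end_logits[j]
            if score > best_score:
                best_score, best_span = score, (i, j)
    # Abstain when the nil position outscores every candidate span.
    if nil_logit + threshold >= best_score:
        return None
    return best_span
```

For example, with strong span logits the function returns a span, while a dominant nil logit makes it abstain; in practice the threshold would be tuned on held-out data to trade answer recall against false answers on unanswerable questions.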


Cited by 20 publications (27 citation statements). References 14 publications.
“…There has been little work addressing the predictive uncertainty issue of MRC. Note that there have been some studies tackling the unanswerable question (i.e., null answer) problem in MRC [20,23,34], which is different from the predictive uncertainty issue we discuss here.…”
Section: Introduction
confidence: 89%
“…A common solution is to add a null position to the passage [23,38]. Some work also introduced an additional model to detect those unanswerable questions [20].…”
Section: Machine Reading Comprehension
confidence: 99%
“…For instance, Jia and Liang (2017) demonstrate a drop of 35–75% in F1 scores of 16 models for the reading comprehension task trained over SQuAD (Rajpurkar, Zhang, Lopyrev, & Liang, 2016), by adversarially adding another sentence to the input paragraph (from which the system has to select the relevant span, given the question). Following, a new version of the aforementioned data set was released comprising of unanswerable questions (Rajpurkar, Jia, & Liang, 2018), leading to more robust reading comprehension approaches (Hu et al, 2018; Kundu & Ng, 2018). To the best of our knowledge, there has not been any work quantifying or improving the robustness of KGQA models.…”
Section: Emerging Trends
confidence: 99%
“…Recent work shows that it is possible to determine lack of evidence with greater confidence by explicitly modeling for it. Works of Zhong et al (2019) and Kundu and Ng (2018) demonstrate model designs with specialized deep learning architectures that encode evidence in the input and show significant improvement in identifying unanswerable questions. This paper first introduces a baseline that is based on a language model.…”
Section: Introduction
confidence: 99%