2018
DOI: 10.1609/aaai.v32i1.12035

S-Net: From Answer Extraction to Answer Synthesis for Machine Reading Comprehension

Abstract: In this paper, we present a novel approach to machine reading comprehension for the MS-MARCO dataset. Unlike the SQuAD dataset, which aims to answer a question with exact text spans in a passage, the MS-MARCO dataset defines the task as answering a question from multiple passages, where the words in the answer are not necessarily in the passages. We therefore develop an extraction-then-synthesis framework to synthesize answers from extraction results. Specifically, the answer extraction model is first employed to predict the most important sub-spans from the passage; the answer synthesis model then takes the sub-spans as additional features, along with the question and passage, to further elaborate the final answers.

Cited by 29 publications (10 citation statements)
References 9 publications
“…In our reviewed papers, 32% of works focus on non-factoid questions. Because of their difficulty, the systems dealing with non-factoid questions have often lower accuracies (Wang et al 2016;Tan et al 2018a;Wang et al 2018a).…”
Section: MRC Systems Input
confidence: 99%
“…In the abstractive mode, the answer is not necessarily an exact span in the context and is generated according to the question and context. This output type is especially suitable for non-factoid questions (Greco et al 2016;Tan et al 2018a).…”
Section: MRC Systems Output
confidence: 99%
“…Answer extraction Tan et al (2018) develop an extraction-then-synthesis framework to synthesize answers from extraction results. Specifically, the answer extraction model is first employed to predict the most important sub-spans from the passage, then the answer synthesis model takes the subspans as additional features along with the question and passage to further elaborate the final answers.…”
Section: Related Work
confidence: 99%
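The extraction-then-synthesis pipeline described in this citation can be sketched as a two-stage function composition. The sketch below is a toy illustration only, not S-Net's actual neural models: `extract_span` stands in for the extraction model using naive word overlap instead of a pointer network, and `synthesize_answer` stands in for the seq2seq synthesis model using a trivial template. All function names here are hypothetical.

```python
# Toy sketch of an extraction-then-synthesis pipeline (hypothetical
# stand-ins for S-Net's neural models, for illustration only).

def extract_span(question: str, passage: str, max_len: int = 5) -> str:
    """Stage 1 (stand-in for the extraction model): return the passage
    sub-span (up to max_len tokens) with the highest word overlap
    with the question."""
    q_words = set(question.lower().split())
    tokens = passage.split()
    best_span, best_score = "", -1
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + 1 + max_len, len(tokens) + 1)):
            span = tokens[i:j]
            score = sum(w.lower().strip(".,") in q_words for w in span)
            if score > best_score:
                best_score, best_span = score, " ".join(span)
    return best_span

def synthesize_answer(question: str, passage: str, span: str) -> str:
    """Stage 2 (stand-in for the synthesis model): elaborate the
    extracted span, conditioned on question and passage, into a final
    answer. Here just a trivial cleanup template."""
    return span.rstrip(".,") + "."

question = "What color is the sky"
passage = "On a clear day the sky is blue because of Rayleigh scattering ."
span = extract_span(question, passage)
answer = synthesize_answer(question, passage, span)
print(answer)
```

The key design point the framework makes is that stage 2 is free to generate words that never appear in the passage, which is what MS-MARCO requires; the extracted sub-span serves only as an additional feature, not as the final answer.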
“…However this might not be a good solution for MRC when there is a need of generating additional text not included in the passage or the question, augmenting information in multiple passage spans and the question as and when required. The work from (Weston et al, 2014), (Tan et al, 2017), (Cui et al, 2016) are examples of this change of paradigm. An interesting adaptation involves using single or multiple turns of reasoning to effectively exploit the relation among queries, documents, and answers.…”
Section: Related Work
confidence: 99%