Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval 2019
DOI: 10.1145/3331184.3331190

Document Gated Reader for Open-Domain Question Answering

Abstract: In recent years, researchers have achieved considerable success applying neural network methods to question answering (QA). These approaches have achieved state-of-the-art results in simplified closed-domain settings such as the SQuAD (Rajpurkar et al. 2016) dataset, which provides a preselected passage from which the answer to a given question may be extracted. More recently, researchers have begun to tackle open-domain QA, in which the model is given a question and access to a large corpus (e.g., Wikipedia)…

Cited by 81 publications (135 citation statements) · References 31 publications
“…[The snippet flattens a results table (EM / F1); judging by the reported numbers, the dataset columns are presumably Quasar-T, SearchQA, TriviaQA, and Natural Questions.]

Model                                 Quasar-T      SearchQA      TriviaQA      Natural Questions
DrQA (Chen et al, 2017)               37.7 / 44.5   41.9 / 48.7   32.3 / 38.3   29.8 / -
R^3 (Wang et al, 2018a)               35.3 / 41.7   49.0 / 55.3   47.3 / 53.7   29.1 / 37.5
OpenQA (Lin et al, 2018)              42.2 / 49.3   58.8 / 64.5   48.7 / 56.3   28.7 / 36.6
TraCRNet (Dehghani et al, 2019)       43.2 / 54.0   52.9 / 65.1   - / -         - / -
HAS-QA (Pang et al, 2019)             43.2 / 48.9   62.7 / 68.7   63.6 / 68.9   - / -
BERT (Large) (Nogueira et al, 2018)   (scores cut off in the snippet)

…passages with questions, aka inter-sentence matching (Wang and Jiang, 2017; Wang et al, 2016; Seo et al, 2017; Song et al, 2017). However, the BERT model simply concatenates a passage with a question, differentiating the two by separating them with the delimiter token [SEP] and assigning them different segment ids.…”
Section: Effect of Global Normalization (mentioning)
confidence: 99%
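The [SEP]-and-segment-id packing this snippet describes is easy to see concretely. Below is a minimal sketch, assuming the HuggingFace transformers library and the stock bert-base-uncased checkpoint; both are illustrative choices, not something the cited works prescribe.

    # A question and a passage are packed into one sequence:
    # [CLS] question tokens [SEP] passage tokens [SEP],
    # with segment (token type) ids distinguishing the two texts.
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

    question = "Who wrote Hamlet?"
    passage = "Hamlet is a tragedy written by William Shakespeare."

    encoding = tokenizer(question, passage, return_token_type_ids=True)

    print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
    # token_type_ids are 0 for the question segment and 1 for the passage segment
    print(encoding["token_type_ids"])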
“…However, the question of the proper granularity of passages is still underexplored. Third, a passage ranker for selecting high-quality passages has been shown to be very useful in previous open-domain QA systems (Wang et al, 2018a; Lin et al, 2018; Pang et al, 2019). However, we do not know whether one is still required for BERT.…”
Section: Introduction (mentioning)
confidence: 97%
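To make the passage-ranking step concrete, here is a minimal sketch of a ranker that scores retrieved passages against the question and keeps the top-k. TF-IDF cosine similarity (via scikit-learn) stands in for the learned neural rankers of the cited systems; the function name, corpus, and parameters are all illustrative, not drawn from those papers.

    # Toy passage ranker: keep the k passages most similar to the question.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def rank_passages(question, passages, k=3):
        """Return the k passages most similar to the question."""
        vectorizer = TfidfVectorizer()
        # One shared vocabulary for the question and all passages.
        matrix = vectorizer.fit_transform([question] + passages)
        # Row 0 is the question; rows 1..n are the passages.
        scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
        top = scores.argsort()[::-1][:k]
        return [passages[i] for i in top]

    passages = [
        "The Eiffel Tower is in Paris.",
        "Paris is the capital of France.",
        "Bananas are rich in potassium.",
    ]
    print(rank_passages("What is the capital of France?", passages, k=2))

In the cited systems the scorer is trained jointly with (or for) the reader, but the control flow, scoring every candidate passage and truncating to the top-k, is the same.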
“…Machine Reading at Scale: first proposed and formalized in Chen et al (2017), MRS has gained popularity, with an increasing amount of work on both dataset collection (Joshi et al, 2017; …) and MRS model development (Wang et al, 2018; Clark and Gardner, 2017; Htut et al, 2018). In some previous work, paragraph-level retrieval modules were mainly for improving the recall of required information, while in some other works (Yang et al, 2018), sentence-level retrieval modules were merely for solving the auxiliary sentence selection task.…”
Section: Related Work (mentioning)
confidence: 99%
“…The typical pipeline of open-domain QA systems (Chen et al, 2017; Wang et al, 2018; Htut et al, 2018) is to first use an IR system to retrieve a compact set of paragraphs and then run a machine reading model over the concatenated or reranked paragraphs. While IR works reasonably well for simple questions, it often fails to retrieve the correct answer paragraph for multi-hop questions.…”
Section: Introduction (mentioning)
confidence: 99%
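A schematic sketch of that retrieve-then-read control flow follows. The functions retrieve, rerank, and extract_answer are hypothetical placeholders for the IR system, the optional passage ranker, and the machine reading model; only the pipeline shape comes from the snippet, not any particular implementation.

    # Schematic open-domain QA pipeline: retrieve, rerank, then read.
    from typing import Callable, List, Tuple

    def open_domain_qa(
        question: str,
        retrieve: Callable[[str, int], List[str]],               # IR over the corpus
        rerank: Callable[[str, List[str]], List[str]],           # optional passage ranker
        extract_answer: Callable[[str, str], Tuple[str, float]], # machine reader
        n_retrieve: int = 50,
        n_read: int = 10,
    ) -> str:
        # Step 1: retrieve a compact set of candidate paragraphs.
        paragraphs = retrieve(question, n_retrieve)
        # Step 2: rerank them and keep only the most promising ones.
        paragraphs = rerank(question, paragraphs)[:n_read]
        # Step 3: read each paragraph and return the highest-scoring span.
        # For multi-hop questions, step 1 alone often misses the paragraph
        # that actually contains the answer, the failure mode noted above.
        spans = [extract_answer(question, p) for p in paragraphs]
        answer, _score = max(spans, key=lambda s: s[1])
        return answer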