Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.292

Answering Open-Domain Questions of Varying Reasoning Steps from Text

Abstract: We develop a unified system to answer directly from text open-domain questions that may require a varying number of retrieval steps. We employ a single multi-task transformer model to perform all the necessary subtasks (retrieving supporting facts, reranking them, and predicting the answer from all retrieved documents) in an iterative fashion. We avoid crucial assumptions of previous work that do not transfer well to real-world settings, including exploiting knowledge of the fixed number of retrieval steps required to answer each question or using structured metadata like knowledge bases or web links that have limited availability.
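The abstract describes a single model that alternates retrieval, reranking, and reading until it can answer, rather than assuming a fixed number of hops. The sketch below illustrates that control flow only; `model.retrieve`, `model.rerank`, and `model.read` are hypothetical stand-ins for the paper's multi-task model heads, not its actual API.

```python
# Minimal sketch of the iterative retrieve-rerank-read loop described in
# the abstract. All model methods here are assumed interfaces, not the
# authors' real implementation.

def answer_question(question, corpus, model, max_steps=4):
    """Iteratively gather evidence until the model produces an answer."""
    context = []   # supporting documents accumulated so far
    answer = None
    for _ in range(max_steps):
        # 1. Retrieve candidates conditioned on the question and the
        #    evidence gathered in earlier steps.
        candidates = model.retrieve(question, context, corpus)
        # 2. Rerank the candidates and keep the most promising one.
        context.append(model.rerank(question, context, candidates)[0])
        # 3. Attempt to read an answer from everything retrieved so far.
        #    The model itself decides whether more retrieval is needed,
        #    so no fixed number of reasoning steps is assumed.
        answer = model.read(question, context)
        if answer is not None:
            return answer
    return answer  # may be None if no confident answer was found
```

Because the stopping decision is made by the reader at every step, the same loop handles single-hop and multi-hop questions without knowing the reasoning depth in advance.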

Cited by 23 publications (15 citation statements)
References 29 publications
“…Step Execution (EX) Model. Similar to prior work (Talmor and Berant, 2018; Min et al., 2019b; Qi et al., 2021), this model performs explicit, step-by-step multihop reasoning by first decomposing the question Q into a DAG G_Q of single-hop questions, and then calling a single-hop model repeatedly to execute this decomposition. The decomposer is trained with gold decompositions and is implemented with BART-large.…”
Section: Multihop Models (mentioning)
confidence: 99%
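The quoted step-execution approach decomposes a question into a DAG of single-hop sub-questions and answers them in dependency order. Below is an illustrative sketch under assumed formats: the DAG maps each node to its parents, each node carries a sub-question template with `#<parent>` placeholders, and `single_hop_qa` is a hypothetical single-hop QA function; none of this is code from the cited papers.

```python
# Illustrative execution of a question-decomposition DAG with a
# single-hop QA model. The graph format, templates, and single_hop_qa
# are assumptions for this sketch, not the cited papers' interfaces.
from graphlib import TopologicalSorter  # Python 3.9+

def execute_decomposition(dag, templates, single_hop_qa):
    """dag maps each node to its parent nodes; templates hold single-hop
    questions with #<parent> placeholders for the parents' answers."""
    answers = {}
    # Visit nodes in dependency order so every parent is answered first.
    for node in TopologicalSorter(dag).static_order():
        question = templates[node]
        for parent in dag[node]:
            # Substitute the parent's answer into this sub-question.
            question = question.replace(f"#{parent}", answers[parent])
        answers[node] = single_hop_qa(question)
    return answers
```

For example, "What is the capital of the country where the Danube rises?" could decompose into node 1, "Which country does the Danube rise in?", feeding node 2, "What is the capital of #1?"; the final node's answer is returned.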
“…Given a question q, the retriever finds sequences of supporting documents (paths) of length n that can be used to answer the question. At each step of the retrieval process, we use an inexpensive retrieval method to identify a small set of promising candidates to narrow down the search space, as is commonly done in the literature (Qi et al., 2021; Asai et al., 2020). We then use an LM to more accurately rerank n-hop chains of documents based on their relevance to the question (described in Section 2.2).…”
Section: Overview (mentioning)
confidence: 99%
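The quoted passage describes a two-stage path retriever: a cheap method proposes next-hop candidates and a language model reranks whole document chains. A minimal beam-search sketch follows, assuming hypothetical `cheap_retrieve` (e.g. a BM25 lookup) and `lm_score` (chain-relevance) functions.

```python
# Sketch of two-stage multi-hop path retrieval: inexpensive retrieval
# narrows the candidates at each hop, and an LM reranks whole chains.
# cheap_retrieve and lm_score are assumed interfaces for illustration.

def retrieve_paths(question, corpus, cheap_retrieve, lm_score,
                   hops=2, beam=5, fanout=20):
    """Beam search over document chains of length `hops`."""
    paths = [[]]  # start from the empty chain
    for _ in range(hops):
        expanded = []
        for path in paths:
            # Stage 1: a cheap retriever (e.g. BM25) proposes a small
            # set of promising next documents, narrowing the search.
            for doc in cheap_retrieve(question, path, corpus, k=fanout):
                expanded.append(path + [doc])
        # Stage 2: an LM scores each full chain against the question;
        # only the top `beam` chains survive to the next hop.
        expanded.sort(key=lambda p: lm_score(question, p), reverse=True)
        paths = expanded[:beam]
    return paths
```

Keeping the expensive LM scoring on whole chains, rather than on single documents, lets the reranker judge whether the documents actually connect into a coherent reasoning path.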
“…Asai et al. (2020) combined a TF-IDF retriever with a recurrent graph retriever and used the reader module to rerank paths based on answer confidence. Qi et al. (2021) used a single transformer model to perform retrieval, reranking, and reading in an iterative fashion. However, the strong performance of previous work comes mainly from training on a large number of examples, and these methods are likely to fail in low-data settings.…”
Section: Related Work (mentioning)
confidence: 99%
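One detail in this summary worth making concrete is reranking retrieved paths by the reader's answer confidence, as attributed to Asai et al. (2020). The sketch below shows that selection rule only; `reader.predict` returning an (answer, confidence) pair is an assumed interface, not the cited system's API.

```python
# Sketch of reader-based path reranking: the reader answers from each
# retrieved path, and the highest-confidence answer wins.
# reader.predict is an assumed interface for this illustration.

def answer_from_paths(question, paths, reader):
    best_answer, best_conf = None, float("-inf")
    for path in paths:
        # The reader extracts an answer from this path's documents and
        # reports its confidence in that answer.
        answer, confidence = reader.predict(question, path)
        if confidence > best_conf:
            best_answer, best_conf = answer, confidence
    return best_answer
```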
“…(Harman, 1993), SQuAD (Rajpurkar et al., 2018), NewsQA (Trischler et al., 2017), SearchQA (Dunn et al., 2017), and QuAC (Choi et al., 2018), and intensive efforts were made to build new models that surpass human performance on these datasets, including pre-trained language models (Devlin et al., 2019; Yang et al., 2019a) and ensemble models that outperform humans, in particular on SQuAD (Lan et al., 2020; Yamada et al., 2020). More challenging datasets have also been introduced, which require several reasoning steps to answer (Yang et al., 2018; Qi et al., 2021), understanding of a much larger context (Kočiský et al., 2018), or understanding of adversarial content and numeric reasoning (Dua et al., 2019).…”
Section: Introduction (mentioning)
confidence: 99%