Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)
DOI: 10.18653/v1/2020.emnlp-main.10
Learning to Explain: Datasets and Models for Identifying Valid Reasoning Chains in Multihop Question-Answering

Abstract: Despite the rapid progress in multihop question-answering (QA), models still have trouble explaining why an answer is correct, with limited explanation training data available to learn from. To address this, we introduce three explanation datasets in which explanations formed from corpus facts are annotated. Our first dataset, eQASC, contains over 98K explanation annotations for the multihop question answering dataset QASC, and is the first that annotates multiple candidate explanations for each answer. The se…

Cited by 34 publications (48 citation statements)
References 19 publications
“…Abduction CQ → fm Given C and an unprovable fact Q, identify a new fact fm that, when added to C, would make Q true. erate human-style justifications, which again are typically supporting evidence rather than a fully-formed line of reasoning, and without explicit reasoning rules (Camburu et al., 2018; Jhamtani and Clark, 2020; Inoue et al., 2020). In contrast, ProofWriter produces a deductive chain of reasoning from what is known to what is concluded, using a transformer retrained to reason systematically.…”
Section: Related Work (mentioning, confidence: 99%)
“…Structured Explanations: There is useful previous work on developing interpretable and explainable models (Doshi-Velez and Kim, 2017; Rudin, 2019; Hase and Bansal, 2020; Jacovi and Goldberg, 2020) for NLP. Explanations in NLP take three major forms: (1) extractive rationales or highlights (Zaidan et al., 2007; Lei et al., 2016; Yu et al., 2019; DeYoung et al., 2020), where a subset of the input text explains a prediction; (2) free-form or natural language explanations (Camburu et al., 2018; Rajani et al., 2019; Zhang et al., 2020; Kumar and Talukdar, 2020) that are not constrained to the input; and (3) structured explanations that range from semi-structured text (Ye et al., 2020) to chains of facts (Khot et al., 2020; Jhamtani and Clark, 2020; Gontier et al., 2020) to explanation graphs (based on edges between chains of facts) (Jansen et al., 2018; Jansen and Ustalov, 2019; Xie et al., 2020).…”
Section: Related Work (mentioning, confidence: 99%)
“…There is a recent explosion of explanation-centred datasets for multi-hop question answering (Jhamtani and Clark, 2020; Xie et al., 2020; Jansen et al., 2018; Yang et al., 2018; Thayaparan et al., 2020; Wiegreffe and Marasović, 2021). However, most of these datasets require the aggregation of only two sentences or paragraphs, making it hard to evaluate the robustness of the models in terms of semantic drift.…”
Section: Many-hop Multi-hop Training Data (mentioning, confidence: 99%)