Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1609

Giving BERT a Calculator: Finding Operations and Arguments with Reading Comprehension

Abstract: Reading comprehension models have been successfully applied to extractive text answers, but it is unclear how best to generalize these models to abstractive numerical answers. We enable a BERT-based reading comprehension model to perform lightweight numerical reasoning. We augment the model with a predefined set of executable 'programs' which encompass simple arithmetic as well as extraction. Rather than having to learn to manipulate numbers directly, the model can pick a program and execute it. On the recent …
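To make the program-selection idea concrete, here is a minimal Python sketch. The program names, the argument representation, and the hard selection step are illustrative assumptions for this sketch, not the paper's actual model, which scores programs and arguments jointly.

from typing import Callable, Dict, List

# Hypothetical library of executable 'programs' over numbers found in the passage.
PROGRAMS: Dict[str, Callable[[List[float]], float]] = {
    "sum": lambda args: sum(args),            # add the selected numbers
    "diff": lambda args: args[0] - args[1],   # subtract the second number from the first
    "count": lambda args: float(len(args)),   # count the selected items
}

def execute(program: str, arguments: List[float]) -> float:
    # Run the program the model picked on the arguments it extracted.
    return PROGRAMS[program](arguments)

# Example: suppose the model predicts the 'diff' program and extracts 120 and 35 from the text.
print(execute("diff", [120.0, 35.0]))  # 85.0

Because each program is executed rather than learned, the model only needs to classify which operation applies and which numbers are its arguments.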

Cited by 63 publications (54 citation statements). References 10 publications (8 reference statements).
“…Hu et al (2019) proposed to predict the number of output spans for each question, and used a non-differentiable inference procedure to find them in the text, leading to a complex training procedure. Andor et al (2019) proposed a Merge operation that merges spans, but is constrained to at most 2 spans. Chen et al (2020) proposed a non-differentiable symbolic approach which outputs programs that compose single-span extractions.…”
Section: Introduction (mentioning)
confidence: 99%
“…The overall experimental results are reported in Table 2, where the performance of baseline methods is obtained from previous work (Dua et al., 2019; Seo et al., 2017; Ran et al., 2019; Andor et al., 2019) and the public leaderboard. The first three methods in Table 2 are based on either semantic parsing or information extraction, and perform poorly on the numerical MRC task.…”
Section: Results (mentioning)
confidence: 99%
“…To improve this method, Ran et al. (2019) proposed NumNet, which constructs a number comparison graph that encodes the relative magnitude information between numbers on directed edges. Although NumNet achieves superior performance to other numerically-aware models (Hu et al., 2019a; Andor et al., 2019; Geva et al., 2020; …), we argue that NumNet is insufficient for sophisticated numerical reasoning, since it lacks two critical ingredients for numerical reasoning:…”
Section: Introduction (mentioning)
confidence: 87%
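For readers unfamiliar with the number comparison graph mentioned in the quote above, here is a minimal Python sketch of the idea: numbers become nodes and a directed edge records which of two numbers is larger. The construction below is an assumption for illustration only, not NumNet's actual graph-building procedure.

from itertools import permutations
from typing import List, Set, Tuple

def build_comparison_graph(numbers: List[float]) -> Set[Tuple[int, int]]:
    # Directed edge (i, j) means numbers[i] is greater than numbers[j].
    return {(i, j) for i, j in permutations(range(len(numbers)), 2)
            if numbers[i] > numbers[j]}

# Example: three numbers extracted from a passage.
nums = [35.0, 120.0, 7.0]
print(sorted(build_comparison_graph(nums)))  # [(0, 2), (1, 0), (1, 2)]

Encoding relative magnitude as explicit edges lets a graph network reason about comparisons directly instead of inferring them from token embeddings.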