2021
DOI: 10.48550/arxiv.2109.00799
Preprint
MWPToolkit: An Open-Source Framework for Deep Learning-Based Math Word Problem Solvers

Abstract: Developing automatic Math Word Problem (MWP) solvers has interested NLP researchers since the 1960s. Over the last few years, a growing number of datasets and deep learning-based methods have been proposed for effectively solving MWPs. However, most existing methods are benchmarked on only one or two datasets, under varying configurations, which leads to a lack of unified, standardized, fair, and comprehensive comparison between methods. This paper presents MWPToolkit, the first open-source f…

Cited by 9 publications (15 citation statements)
References 22 publications
“…GroupAttn (Li et al, 2019): 21.5; BERT-BERT (Lan et al, 2021): 24.8; Roberta-Roberta (Lan et al, 2021): 30.3; S2T/G2T: GTS* (Xie and Sun, 2019): 30.8; Graph2Tree: 36.5; BERT-Tree: 32.4; Roberta-GTS (Patel et al, 2021): 41.0; Roberta-Graph2Tree (Patel et al, 2021): 43… From the baselines and our models, the choice of encoder appears to be important for solving questions in SVAMP; the results from using Roberta as the encoder are particularly striking. Our best variant ROBERTA-DEDUCTREASONER achieves an accuracy of 47.3 and outperforms the best baseline (Roberta-Graph2Tree) by 3.5 points (p < 0.01).…”
Section: S2S
confidence: 86%
“…and Li et al (2020) adopted a graph-to-tree approach to model quantity relations using graph convolutional networks (GCNs) (Kipf and Welling, 2017). Applying pre-trained language models such as BERT (Devlin et al, 2019) was shown to significantly benefit tree expression generation (Lan et al, 2021; Tan et al, 2021; Shen et al, 2021). Unlike tree-based generation models, our work is related to deductive systems (Shieber et al, 1995; Nederhof, 2003), in which we aim to obtain step-by-step expressions.…”
Section: Related Work
confidence: 99%
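The deductive, step-by-step view contrasts with emitting a whole expression tree at once: each step combines two known quantities with an operator, and the result becomes a new quantity available to later steps. The sketch below is a hypothetical illustration of that idea only; in an actual deductive solver such as the one the statement describes, a neural scorer would choose which (quantity, quantity, operator) triple to apply at each step, whereas here the steps are given explicitly.

```python
def deduce(quantities, steps):
    """Apply deduction steps to a list of known quantities.

    Each step is a triple (i, j, op): combine quantity i and quantity j
    with operator op, and append the intermediate result as a new
    quantity. Returns the final deduced value.
    """
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    qs = list(quantities)
    for i, j, op in steps:
        qs.append(ops[op](qs[i], qs[j]))
    return qs[-1]

# "3 bags of 4 apples, 2 eaten": step 1 makes 3*4=12 (index 3),
# step 2 makes 12-2=10.
print(deduce([3, 4, 2], [(0, 1, "*"), (3, 2, "-")]))  # -> 10
```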
“…many natural language processing tasks, it was shown that transformers and other large language models like GPT do not perform well on math word problems and mathematical reasoning [6,7]. A number of systems have used sequence-to-tree and graph-to-tree neural networks to generate arithmetic expression trees from input questions and used those to calculate answers. The main targets have been the Math23k and MAWPS datasets, which contain more than 23,000 math word problems [8,11,19,22,12,21,15,10]. Similar techniques have been used to tackle more advanced problems.…”
Section: Introduction
confidence: 99%
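Sequence-to-tree decoders of the kind cited above (e.g. GTS) typically emit an expression tree in prefix (Polish) order, which is then evaluated to produce the numeric answer. A minimal sketch of that final evaluation step, under the assumption of binary operators over a flat prefix token list; real solvers additionally map number placeholders (e.g. N0, N1) back to quantities extracted from the problem text, which is omitted here:

```python
def eval_prefix(tokens):
    """Evaluate a prefix-order arithmetic expression given as a token list."""
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}

    def helper(pos):
        tok = tokens[pos]
        if tok in ops:
            # Operator node: recursively evaluate left then right subtree.
            left, pos = helper(pos + 1)
            right, pos = helper(pos)
            return ops[tok](left, right), pos
        # Leaf node: a numeric literal.
        return float(tok), pos + 1

    value, _ = helper(0)
    return value

# Decoder output for "3 bags of 4 apples, 2 eaten": (3 * 4) - 2
print(eval_prefix(["-", "*", "3", "4", "2"]))  # -> 10.0
```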