Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2022
DOI: 10.18653/v1/2022.acl-long.410
Learning to Reason Deductively: Math Word Problem Solving as Complex Relation Extraction

Cited by 32 publications (28 citation statements) | References 0 publications
“…Following Jie et al (2022), we also adapt a rationalizer to update the representations of all quantities and intermediate expressions at the end of each step. This module is crucial because if the representations are not updated, those expressions that were initially highly ranked would always be preferred.…”
Section: Rationalizer
confidence: 99%
“…RoBERTa-GTS and RoBERTa-Graph2Tree replace the original encoder of GTS and Graph2Tree with RoBERTa. The most similar work to our approach is RoBERTa-DeductiveReasoner (Jie et al, 2022), which uses RoBERTa as the encoder and decodes the mathematical expression using a bottom-up approach. However, it encodes the problem text as a whole without any special attention to the question text and fails to preserve any mathematical law when computing the representation for each candidate expression.…”
Section: Baselines
confidence: 99%