Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
DOI: 10.18653/v1/2021.findings-acl.97

Learning Algebraic Recombination for Compositional Generalization

Abstract: Neural sequence models exhibit limited compositional generalization ability in semantic parsing tasks. Compositional generalization requires algebraic recombination, i.e., dynamically recombining structured expressions in a recursive manner. However, most previous studies mainly concentrate on recombining lexical units, which is an important but not sufficient part of algebraic recombination. In this paper, we propose LeAR, an end-to-end neural model to learn algebraic recombination for compositional generalization…
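To make the notion of "recursively recombining structured expressions" concrete, below is a minimal illustrative sketch, not LeAR's actual algorithm: a toy recursive interpreter over SCAN-style commands (Lake and Baroni, 2018), in which the meaning of a command is assembled from the meanings of its sub-expressions. The grammar, primitive inventory, and function names are assumptions made purely for illustration.

    # Illustrative sketch only -- NOT LeAR's algorithm. A toy recursive
    # interpreter over SCAN-style commands, showing how structured
    # expressions can be recombined compositionally.

    PRIMITIVES = {"jump": ["JUMP"], "walk": ["WALK"], "run": ["RUN"]}

    def interpret(cmd):
        """Map a command to an action sequence by recursively composing
        the interpretations of its sub-expressions."""
        words = cmd.split()
        if len(words) == 1:
            return PRIMITIVES[words[0]]
        head, modifier = " ".join(words[:-1]), words[-1]
        if modifier == "twice":
            return interpret(head) * 2   # meaning(x twice) = meaning(x) + meaning(x)
        if modifier == "thrice":
            return interpret(head) * 3
        raise ValueError("unknown modifier: " + modifier)

    print(interpret("jump twice"))        # ['JUMP', 'JUMP']
    print(interpret("jump twice twice"))  # ['JUMP', 'JUMP', 'JUMP', 'JUMP']

A model that has learned such composition rules, rather than memorizing whole input-output pairs, can generalize to combinations unseen in training (e.g., "jump thrice" after seeing "thrice" only with other verbs), which is the kind of recombination the abstract distinguishes from purely lexical substitution.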

Cited by 19 publications (37 citation statements)
References 27 publications
“…For SCAN, NQG-T5 is one of several specialized models that achieves 100% accuracy across multiple splits (Chen et al., 2020; Nye et al., 2020; Herzig and Berant, 2021). For COGS, we show results from LeAR (Liu et al., 2021), the previously reported state-of-the-art on COGS. We also report new results for NQG-T5 on COGS.…”
Section: Baselines (mentioning)
confidence: 72%
“…When we use CSL to generate additional training data for T5 (T5+CSL-Aug.), the performance of T5 improves to nearly match that of T5-3B. We do not show LeAR results for SCAN and GeoQuery as Liu et al. (2021) did not report results for SCAN and reported GeoQuery results using a different template split and a different evaluation metric.…”
Section: Results (mentioning)
confidence: 98%
“…For SCAN, NQG-T5 (Shaw et al., 2021) is one of several specialized models that achieves 100% accuracy across multiple splits (Chen et al., 2020; Nye et al., 2020; Herzig and Berant, 2021). We also report new results for NQG-T5 on COGS, and show results from LeAR (Liu et al., 2021), the previously reported state-of-the-art on COGS. For these synthetic datasets, the induced grammars have high coverage, making the CSL model highly effective for data augmentation.…”
Section: Results (mentioning)
confidence: 51%
“…On structural generalization in particular, the accuracy of all these models is below 10%, with the exception of Zheng and Lapata (2021), who achieve 39% on PP recursion. By contrast, the compositional model of Liu et al. (2021) and the model of Qiu et al. (2022), which uses compositional data augmentation, achieve accuracies upwards of 98% on the full generalization set.…”
Section: Compositional Generalization in COGS (mentioning)
confidence: 94%
“…For instance, Shaw et al. (2021) describe a synchronous grammar induction approach that achieves perfect accuracy on SCAN (Lake and Baroni, 2018), but has very low accuracy on corpora of naturally occurring text such as GeoQuery (Zelle and Mooney, 1996) and Spider (Yu et al., 2018). Similarly, the compositional LeAR parser (Liu et al., 2021) solves COGS with near-perfect accuracy and performs very well on other synthetic datasets, but has not been evaluated on corpora of naturally occurring text. This points to a fundamental tension between broad-coverage semantic parsing on natural text and the ability to generalize compositionally from structurally limited synthetic training sets (see also Shaw et al., 2021).…”
Section: Introduction (mentioning)
confidence: 99%