Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.33
Meta-Learning for Domain Generalization in Semantic Parsing

Abstract: The importance of building semantic parsers which can be applied to new domains and generate programs unseen at training has long been acknowledged, and datasets testing out-of-domain performance are becoming increasingly available. However, little or no attention has been devoted to learning algorithms or objectives which promote domain generalization, with virtually all existing approaches relying on standard supervised learning. In this work, we use a meta-learning framework which targets zero-shot domain generalization…

Cited by 37 publications (19 citation statements). References 39 publications.
“…Representation learning also includes two kinds of representative techniques: domain-invariant representation learning [13,25], which applies kernel methods, adversarial training, explicit feature alignment between domains, or invariant risk minimization to learn domain-invariant representations, and feature disentanglement [12,41], which tries to disentangle features into domain-shared and domain-specific parts for better generalization. Learning strategy mainly has three kinds of methods: ensemble learning [16], which relies on the power of ensembles to learn a unified and generalized predictive function; meta-learning [2,30], which is based on the learning-to-learn mechanism and acquires general knowledge by constructing meta-learning tasks that simulate domain shift; and gradient operation [11,28], which tries to learn generalized representations by operating directly on gradients. For more details, please refer to [33].…”
Section: Domain Generalizationmentioning
confidence: 99%
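The meta-learning strategy described above — constructing meta-learning tasks that simulate domain shift — can be sketched in a first-order, MAML-style form: each step picks a virtual "source" and "target" domain, adapts the parameters on the source, and then updates the original parameters so the adapted ones also fit the unseen target. The toy data, the one-parameter linear model, and all hyperparameters below are illustrative assumptions, not the paper's actual architecture or objective.

```python
# First-order sketch of meta-learning for domain generalization:
# simulate domain shift by splitting available domains into a virtual
# source (inner adaptation) and a virtual target (outer objective).
import random

def loss_grad(w, data):
    """Mean squared error and its gradient for the toy model y = w * x."""
    n = len(data)
    loss = sum((w * x - y) ** 2 for x, y in data) / n
    grad = sum(2 * (w * x - y) * x for x, y in data) / n
    return loss, grad

def dg_meta_step(w, source, target, inner_lr=0.01, outer_lr=0.01):
    # Inner step: adapt parameters to the virtual source domain.
    _, g_src = loss_grad(w, source)
    w_adapted = w - inner_lr * g_src
    # Outer step (first-order approximation): evaluate the adapted
    # parameters on the virtual target domain and update the originals,
    # so one adaptation step generalizes across the simulated shift.
    _, g_tgt = loss_grad(w_adapted, target)
    return w - outer_lr * (g_src + g_tgt)

# Two toy "domains" with different input ranges but a shared rule y = 3x.
domains = [
    [(x, 3 * x) for x in (1.0, 2.0, 3.0)],
    [(x, 3 * x) for x in (4.0, 5.0, 6.0)],
]

random.seed(0)
w = 0.0
for _ in range(200):
    src, tgt = random.sample(domains, 2)  # simulate a domain shift
    w = dg_meta_step(w, src, tgt)
print(round(w, 2))  # converges toward the shared parameter 3.0
```

The key design choice is that the outer update is computed on a domain the inner step never saw, which is what distinguishes this from ordinary supervised training pooled over domains.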
“…Domain generalisation has been mostly studied in computer vision (Wang et al., 2021b). The main approaches include invariant feature learning (Li et al., 2018; Wang et al., 2020b), data augmentation (Wang et al., 2020a), and meta-learning (Balaji et al., 2018; Wang et al., 2021a). Although domain mismatch is a known challenge in NMT (Müller et al., 2020), domain generalisation has only recently drawn attention, with the introduction of zero-shot evaluation in the WMT2020 Robustness shared task (Specia et al., 2020), and it remains under-explored.…”
Section: Related Workmentioning
confidence: 99%
“…There are semantic parsing studies other than code generation that deal with out-of-domain settings. In the Text2SQL task, which generates SQL statements given a natural language intent and a table schema as inputs, researchers have developed models that can generalize to table schemas that did not appear during training (Yu et al., 2018; Zhong et al., 2020; Suhr et al., 2020; Wang et al., 2021). Pasupat and Liang (2015) proposed a semantic parsing model for question answering on unknown tables.…”
Section: Zero-shot Semantic Parsingmentioning
confidence: 99%