Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics (2021)
DOI: 10.18653/v1/2021.starsem-1.16

One Semantic Parser to Parse Them All: Sequence to Sequence Multi-Task Learning on Semantic Parsing Datasets

Abstract: Semantic parsers map natural language utterances to meaning representations. The lack of a single standard for meaning representations led to the creation of a plethora of semantic parsing datasets. To unify different datasets and train a single model for them, we investigate the use of Multi-Task Learning (MTL) architectures. We experiment with five datasets (GEOQUERY, NLMAPS, TOP, OVERNIGHT, AMR). We find that an MTL architecture that shares the entire network across datasets yields competitive or better par…
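The full-sharing setup described in the abstract can be pictured as one encoder-decoder trained on a mixture of examples from all datasets. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' implementation: the toy (utterance, meaning-representation) pairs, the dataset-prefix tokens, the LSTM architecture, and the uniform dataset sampling are all illustrative choices made here.

```python
# Minimal sketch: one seq2seq model with full parameter sharing across
# several semantic parsing datasets. Toy data, the <dataset> prefix tokens,
# and uniform sampling are assumptions for illustration only.
import random
import torch
import torch.nn as nn

# Toy stand-ins for GEOQUERY / NLMAPS-style (utterance, meaning representation) pairs.
DATASETS = {
    "geoquery": [("how many states border texas", "count ( state ( borders ( texas ) ) )")],
    "nlmaps":   [("where are cafes in paris", "query ( area ( paris ) , nwr ( cafe ) )")],
}

def build_vocab(datasets):
    # Joint vocabulary over all datasets, plus one tag token per dataset.
    tokens = {"<pad>", "<s>", "</s>"}
    for name, pairs in datasets.items():
        tokens.add(f"<{name}>")
        for src, tgt in pairs:
            tokens.update(src.split())
            tokens.update(tgt.split())
    return {tok: i for i, tok in enumerate(sorted(tokens))}

VOCAB = build_vocab(DATASETS)

def encode(text, vocab):
    return torch.tensor([vocab[t] for t in text.split()], dtype=torch.long)

class SharedSeq2Seq(nn.Module):
    """A single encoder/decoder shared by every dataset (full network sharing)."""
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, src, tgt_in):
        _, state = self.encoder(self.embed(src))          # encode the tagged utterance
        dec, _ = self.decoder(self.embed(tgt_in), state)  # teacher-forced decoding
        return self.out(dec)

model = SharedSeq2Seq(len(VOCAB))
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Multi-task loop: sample a dataset, prepend its tag, update the one shared model.
for step in range(100):
    name = random.choice(list(DATASETS))
    src_text, tgt_text = random.choice(DATASETS[name])
    src = encode(f"<{name}> {src_text}", VOCAB).unsqueeze(0)
    tgt = encode(f"<s> {tgt_text} </s>", VOCAB).unsqueeze(0)
    logits = model(src, tgt[:, :-1])
    loss = loss_fn(logits.reshape(-1, logits.size(-1)), tgt[:, 1:].reshape(-1))
    optim.zero_grad()
    loss.backward()
    optim.step()
```

The dataset tag prepended to each input is one common way to let a fully shared network tell datasets apart; mixing ratios and sharing granularity are design choices the paper compares, which this sketch does not attempt to reproduce.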

Cited by 3 publications (4 citation statements)
References 45 publications
“…1) How to choose auxiliary task? The task selection is important since loosely related tasks may even impede the AMR parsing according to Damonte and Monti (2021). However, in literature there are no principles or consensus on how to choose the proper auxiliary tasks for AMR parsing.…”
Section: Introduction
Confidence: 99%
“…For example, Machine Translation generates text sequences while Dependency Parsing (DP) and Semantic Role Labeling (SRL) produce dependency trees and semantic role forests respectively, as shown in Figure 1. Prior studies (Xu et al., 2020; Wu et al., 2021; Damonte and Monti, 2021) do not attach particular importance to the gap, which might lead the auxiliary tasks to become outlier tasks (Cai et al., 2017) in the multi-task learning, deteriorating the performance of AMR parsing. 3) How to introduce auxiliary tasks more effectively?…”
Section: Introduction
Confidence: 99%
“…An example of this category would be a test that investigates how well one pretrained model generalises with respect to an o.o.d. finetuning train-test split (Damonte and Monti, 2021; Kavumba et al., 2022; Ludwig et al., 2022). The parts of the modelling pipeline that studies with a finetune train-test locus can evaluate are the same as studies with a train-test locus, although studies that investigate the generalisation abilities of a single finetuned model instance are rare.…”
Confidence: 99%