Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
DOI: 10.18653/v1/2021.findings-acl.282
Learning to Bridge Metric Spaces: Few-shot Joint Learning of Intent Detection and Slot Filling

Abstract: In this paper, we investigate few-shot joint learning for dialogue language understanding. Most existing few-shot models learn a single task each time with only a few examples. However, dialogue language understanding contains two closely related tasks, i.e., intent detection and slot filling, and often benefits from jointly learning the two tasks. This calls for new few-shot learning techniques that are able to capture task relations from only a few examples and jointly learn multiple tasks. To achieve this, …

Cited by 9 publications (6 citation statements)
References 33 publications
“…A graph neural network was also utilized to directly model the interactions. To adaptively model the interaction, Hou et al. [35] proposed a similarity-based learning framework, and Cai et al. [36] incorporated a slot-intent classifier. Unlike our method, these approaches primarily focus on the model structure; researchers have not fully explored the potential of PLMs.…”
Section: Joint Model in Natural Language Understanding
confidence: 99%
“…Therefore, it is reasonable to transfer the general semantic representation from source domains to target domains, while it is difficult to transfer the domain-specific knowledge. Although a few works have noticed the differences between the two parts, they still conduct the transfer as a whole (Hou et al. 2021; Liu et al. 2021), which may be inefficient.…”
Section: Source Target
confidence: 99%
“…• JointProto (Krone, Zhang, and Diab 2020) jointly learns the intent and slot representations by sharing a single BERT encoder on source domains, without fine-tuning on target domains. • ConProm (Hou et al. 2021) merges the intent and slot representations into one space and learns the representations by contrastive learning. • ConProm+TR+FT, where FT denotes fine-tuning the model and TR denotes the transition rules of BIO annotation, which ban illegal slot predictions from left to right during target-domain evaluation.…”
Section: Baselines and Evaluation Metrics
confidence: 99%
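The transition rules (TR) in the excerpt above are described only at a high level. The following is a minimal sketch, assuming the common interpretation of such rules: during left-to-right decoding, an I-X tag is allowed only after B-X or I-X of the same slot type. The label set, scores, and function names are hypothetical, not the authors' implementation.

```python
# Minimal sketch (assumption, not the paper's code): enforce BIO transition
# rules during greedy left-to-right decoding, banning illegal slot tags such
# as an "I-city" that does not follow "B-city" or "I-city".

from typing import Dict, List


def is_legal_transition(prev_tag: str, tag: str) -> bool:
    """Return True if `tag` may follow `prev_tag` under BIO constraints."""
    if not tag.startswith("I-"):
        return True                      # O and B-* are always allowed
    slot_type = tag[2:]
    return prev_tag in (f"B-{slot_type}", f"I-{slot_type}")


def constrained_greedy_decode(scores: List[Dict[str, float]]) -> List[str]:
    """Pick the highest-scoring legal tag at each position, left to right.

    `scores[i]` maps each candidate tag to its score for token i.
    """
    tags: List[str] = []
    prev = "O"                           # sentence start behaves like "O"
    for token_scores in scores:
        legal = {t: s for t, s in token_scores.items()
                 if is_legal_transition(prev, t)}
        best = max(legal, key=legal.get)
        tags.append(best)
        prev = best
    return tags


if __name__ == "__main__":
    # Toy example: the illegal "I-city" at position 0 is banned.
    toy_scores = [
        {"O": 0.1, "B-city": 0.3, "I-city": 0.6},
        {"O": 0.2, "B-city": 0.1, "I-city": 0.7},
    ]
    print(constrained_greedy_decode(toy_scores))  # ['B-city', 'I-city']
```

The same constraint can equivalently be folded into Viterbi decoding by setting illegal transition scores to negative infinity; the greedy version above simply mirrors the left-to-right description in the excerpt.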
“…Hou et al. (2021) improve few-shot slot tagging performance by jointly learning it with intent detection.…”
confidence: 99%
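The citation statements refer to few-shot joint learning of intent detection and slot filling only in passing. Purely as an illustration of the general prototype-based, metric-learning setup they allude to, and not the authors' actual ConProm method, the sketch below computes class prototypes from a small support set and labels queries by nearest-prototype similarity, scoring intents at sentence level and slots at token level. All names, shapes, and the dot-product similarity are assumptions.

```python
# Illustrative sketch (assumption): prototype-based few-shot prediction, with
# sentence embeddings matched against intent prototypes and token embeddings
# matched against slot-label prototypes.

import numpy as np


def prototypes(embeddings: np.ndarray, labels: np.ndarray) -> dict:
    """Average the support-set embeddings of each label into a prototype."""
    return {l: embeddings[labels == l].mean(axis=0) for l in np.unique(labels)}


def nearest_prototype(query: np.ndarray, protos: dict):
    """Return the label whose prototype has the highest dot-product similarity."""
    return max(protos, key=lambda l: float(query @ protos[l]))


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Hypothetical support set: 6 sentence embeddings covering 2 intents.
    sent_emb = rng.normal(size=(6, 8))
    intent_labels = np.array([0, 0, 0, 1, 1, 1])

    # Hypothetical support set: 10 token embeddings covering 3 slot labels.
    tok_emb = rng.normal(size=(10, 8))
    slot_labels = np.array([0, 0, 0, 0, 1, 1, 1, 2, 2, 2])

    intent_protos = prototypes(sent_emb, intent_labels)
    slot_protos = prototypes(tok_emb, slot_labels)

    # Classify a query sentence and one of its tokens by nearest prototype.
    query_sentence = rng.normal(size=8)
    query_token = rng.normal(size=8)
    print("intent:", nearest_prototype(query_sentence, intent_protos))
    print("slot:", nearest_prototype(query_token, slot_protos))
```

A method that bridges the two metric spaces, as the paper's title suggests, would additionally relate the intent and slot prototypes (e.g., via contrastive learning, as in the ConProm description above) rather than treating the two classifiers independently as this toy example does.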