Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence 2021
DOI: 10.24963/ijcai.2021/540
Learning Class-Transductive Intent Representations for Zero-shot Intent Detection

Abstract: Zero-shot intent detection (ZSID) aims to deal with the continuously emerging intents without annotated training data. However, existing ZSID systems suffer from two limitations: 1) They are not good at modeling the relationship between seen and unseen intents. 2) They cannot effectively recognize unseen intents under the generalized intent detection (GZSID) setting. A critical problem behind these limitations is that the representations of unseen intents cannot be learned in the training stage. To address this…
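To make the task concrete, here is a minimal sketch of a conventional zero-shot intent detector: utterances and intent labels are embedded in a shared space, and an utterance is assigned the label it is most similar to. The encoder choice and example labels are illustrative assumptions, not the paper's method; CTIR is built to improve on exactly this kind of setup by learning representations for unseen intents during training.

```python
# Minimal zero-shot intent detection sketch (illustrative; not the CTIR method).
# Assumes the sentence-transformers package; any text encoder would work.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Unseen intents are represented only by their label text: no training
# utterances exist for them, which is the core difficulty the paper targets.
intent_labels = ["book restaurant", "play music", "get weather"]
label_emb = encoder.encode(intent_labels)  # shape: (num_intents, dim)

def detect_intent(utterance: str) -> str:
    """Return the intent whose label embedding is closest by cosine similarity."""
    u = encoder.encode([utterance])[0]  # shape: (dim,)
    sims = label_emb @ u / (
        np.linalg.norm(label_emb, axis=1) * np.linalg.norm(u) + 1e-9
    )
    return intent_labels[int(np.argmax(sims))]

print(detect_intent("Will it rain in Paris tomorrow?"))  # expected: "get weather"
```

Because unseen intents enter this pipeline only through their label text at test time, nothing about them is learned during training; that is the limitation the abstract identifies, and it is most damaging in the GZSID setting, where seen and unseen labels compete in the same candidate set.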

Cited by 4 publications (12 citation statements; citing works published 2022–2024). References 0 publications.
“…Based on the results, we can make the following observations. [Results table, reconstructed from the flattened excerpt; the score-column labels were not preserved, but the surrounding text indicates results on SNIPS and SMP. The first method, cited as [28], is CMT per the baseline list quoted below.]

CMT [28]                0.7396  0.7206  0.4452  0.4245
CDSSM [4]               0.7588  0.7580  0.4308  0.3765
ZSDNN [15]              0.7165  0.7116  0.4615  0.3897
IntentCapsNet [34]      0.7752  0.7750  0.4864  0.4227
ReCapsNet [19]          0.7996  0.7980  0.5418  0.4769
RL Self-training [37]   0.8253  0.8726  0.7124  0.6587
CTIR [26]               0… (row truncated in the excerpt)

• In the standard zero-shot intent classification task, our model outperforms other strong baselines on both SNIPS and SMP, which validates the effectiveness of our model in dealing with zero-shot intent classification. In addition, we can also observe that the pre-trained model BERT can improve our model performance effectively.…”
Section: Results Analysis
confidence: 99%
“…We compare the proposed model with the following state-of-the-art baselines: DeViSE [9], CMT [28], CDSSM [4], ZSDNN [15], IntentCapsNet [34], ReCapsNet [19], RL Self-training [37], CTIR [26] and SEG [35]. SEG is a plug-and-play unknown intent detection method, and it integrates with ReCapsNet in the original paper.…”
Section: Baselines
confidence: 99%
“…We conduct our experiments across four varied news and review classification datasets: 20News, Amazon, HuffPost, and Reuters, focusing on few-shot tasks in accordance with the experimental configuration outlined in (Chen et al. 2022). Additionally, we follow (Si et al. 2021) to evaluate our method on intent classification datasets: SNIPS (Coucke et al. 2018) and CLINC (Larson et al. 2019) on zero-shot tasks. Note that the training and validation classes are only used in baselines.…”
Section: Experiments Datasets
confidence: 99%
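As a rough illustration of this kind of zero-shot evaluation protocol, the sketch below partitions intent classes into disjoint seen (training) and unseen (test) sets. The split function, its ratio, and the hard-coded SNIPS intent list are illustrative placeholders, not the exact configuration of (Si et al. 2021).

```python
# Hypothetical seen/unseen class split for zero-shot intent evaluation.
# The split ratio and seed are placeholders, not the protocol of (Si et al. 2021).
import random

def split_classes(all_classes: list[str], unseen_fraction: float = 0.33, seed: int = 0):
    """Partition intent classes into disjoint (seen, unseen) sets."""
    rng = random.Random(seed)
    classes = sorted(all_classes)
    rng.shuffle(classes)
    n_unseen = max(1, int(len(classes) * unseen_fraction))
    return classes[n_unseen:], classes[:n_unseen]

# The seven SNIPS intents.
snips_intents = ["AddToPlaylist", "BookRestaurant", "GetWeather", "PlayMusic",
                 "RateBook", "SearchCreativeWork", "SearchScreeningEvent"]
seen, unseen = split_classes(snips_intents)
# Standard ZSID: train on utterances from `seen`, classify only over `unseen`.
# Generalized GZSID: test utterances may come from either set, and the model
# must choose among all labels, seen and unseen alike.
```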
“…It comprises 22,500 in-scope queries. We follow (Si et al. 2021) and use a subset containing 9,000 queries.…”
Section: Experiments Datasets
confidence: 99%