2022
DOI: 10.48550/arxiv.2205.12914
Preprint

New Intent Discovery with Pre-training and Contrastive Learning

Cited by 3 publications (15 citation statements)
References 0 publications
“…Most current methods are based on the pre-training and fine-tuning paradigm, transferring knowledge implicitly by initializing model parameters (Zhang et al. 2021a). For example, Vaze et al. (2022) proposed to use contrastive learning to pre-train their model, and Zhang et al. (2022) combined supervised learning and masked language modeling to initialize their model. However, we think this paradigm is sub-optimal for GCD because models pre-trained on labeled data tend to be biased towards known categories.…”
Section: Transfer Learning (mentioning)
confidence: 99%
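To make the pre-train-then-fine-tune transfer described in this statement concrete, here is a minimal PyTorch sketch: an encoder is pre-trained with an unsupervised InfoNCE contrastive loss, and its parameters then initialize a downstream classifier. The encoder architecture, dimensions, and temperature are illustrative assumptions, not the exact setups of Vaze et al. (2022) or Zhang et al. (2022).

```python
import torch
import torch.nn.functional as F
from torch import nn

# Illustrative encoder; the cited works start from pre-trained language models (e.g. BERT).
encoder = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 128))

def info_nce(z1, z2, temperature=0.07):
    """InfoNCE loss over two augmented views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature       # (B, B) cosine-similarity logits
    targets = torch.arange(z1.size(0))       # matching views sit on the diagonal
    return F.cross_entropy(logits, targets)

# Stage 1: unsupervised contrastive pre-training on two views per example
# (views could come from dropout noise or token masking; random tensors stand in here).
x1, x2 = torch.randn(32, 768), torch.randn(32, 768)
info_nce(encoder(x1), encoder(x2)).backward()

# Stage 2: transfer knowledge implicitly by initializing the downstream model
# with the pre-trained encoder's parameters before fine-tuning on labeled data.
num_known_classes = 10                       # hypothetical
classifier = nn.Sequential(encoder, nn.Linear(128, num_known_classes))
```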
“…However, these methods only focus on the scenario where known and novel categories are of the same granularity. To discover fine-grained categories, a novel task called Fine-grained Category Discovery under Coarse-grained supervision (FCDC) was proposed by An et al. (2022a). They also proposed a weighted self-contrastive strategy to acquire fine-grained knowledge.…”
Section: Novel Category Discovery (mentioning)
confidence: 99%
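The FCDC setting named above can be illustrated with a small sketch: only coarse-grained labels supervise the encoder, and fine-grained categories are then discovered by clustering the learned features. All dimensions and class counts here are hypothetical, and the plain k-means step stands in for, rather than reproduces, An et al.'s weighted self-contrastive strategy.

```python
import torch
from torch import nn
from sklearn.cluster import KMeans

# Hypothetical numbers: 768-d utterance features, 3 coarse classes given,
# 9 fine-grained categories to discover.
encoder = nn.Linear(768, 128)
coarse_head = nn.Linear(128, 3)

x = torch.randn(300, 768)                    # stand-in utterance embeddings
coarse_labels = torch.randint(0, 3, (300,))  # only coarse supervision is available

# Step 1: train the encoder using the coarse-grained labels (FCDC's supervision).
h = encoder(x)
nn.functional.cross_entropy(coarse_head(h), coarse_labels).backward()

# Step 2: discover fine-grained categories by clustering the learned features
# into more groups than there are coarse classes.
fine_assignments = KMeans(n_clusters=9, n_init=10).fit_predict(h.detach().numpy())
```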
“…Li et al. (2020) proposed to utilize prototypes learned by clustering as their positive keys. Furthermore, An et al. (2022a)…”
Section: Contrastive Learning (mentioning)
confidence: 99%
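The idea of using clustering prototypes as positive keys can be sketched as follows, in the spirit of Li et al. (2020)'s prototypical contrastive learning: embeddings are clustered, each sample's cluster centroid serves as its positive key, and the remaining centroids serve as negatives in an InfoNCE-style loss. The cluster count and temperature are assumptions, and this is a simplified reading rather than the paper's full method.

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def prototype_contrastive_loss(z, n_clusters=10, temperature=0.1):
    """InfoNCE where the positive key for each sample is its cluster prototype."""
    z = F.normalize(z, dim=1)
    # Cluster the (detached) embeddings; normalized centroids act as prototypes.
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(z.detach().numpy())
    protos = F.normalize(torch.tensor(km.cluster_centers_, dtype=z.dtype), dim=1)
    assignments = torch.tensor(km.labels_, dtype=torch.long)
    # Positive key: the prototype of z_i's own cluster; negatives: all other prototypes.
    logits = z @ protos.t() / temperature    # (N, n_clusters)
    return F.cross_entropy(logits, assignments)

z = torch.randn(256, 128, requires_grad=True)  # stand-in embeddings
prototype_contrastive_loss(z).backward()
```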