2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01072

Neighborhood Contrastive Learning for Novel Class Discovery

Cited by 94 publications (75 citation statements)
References 10 publications
“…OpenMix [50] mixes the labelled and unlabelled data to prevent the model from over-fitting in NCD. NCL [49] extracts and aggregates pairwise pseudo-labels for the unlabelled data with contrastive learning and generates hard negatives by mixing the labelled and unlabelled data in the feature space for NCD. Jia et al. [21] propose an end-to-end NCD method for single- and multi-modal data with contrastive learning and winner-takes-all hashing.…”
Section: Related Work
confidence: 99%
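The feature-space mixing the excerpt attributes to NCL [49] can be illustrated with a minimal sketch; this is not the authors' released code, and the function name, tensor shapes and Beta-distributed mixing coefficient are assumptions made for illustration.

```python
# Hedged sketch: synthesize hard negatives by interpolating unlabelled and
# labelled embeddings in feature space (in the spirit of NCL [49]).
import torch
import torch.nn.functional as F

def mix_hard_negatives(unlabelled_feats, labelled_feats, num_mix=128, alpha=0.5):
    """unlabelled_feats: (N_u, D), labelled_feats: (N_l, D), both L2-normalised.
    Returns (num_mix, D) synthetic negatives re-normalised to the unit sphere."""
    idx_u = torch.randint(0, unlabelled_feats.size(0), (num_mix,))
    idx_l = torch.randint(0, labelled_feats.size(0), (num_mix,))
    lam = torch.distributions.Beta(alpha, alpha).sample((num_mix, 1))  # assumed mixing law
    mixed = lam * unlabelled_feats[idx_u] + (1 - lam) * labelled_feats[idx_l]
    return F.normalize(mixed, dim=1)
```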
“…Meanwhile, self-supervised contrastive learning has been widely used as pre-training to achieve robust representations in NCD [21,49]. Furthermore, when combined with vision transformers, it generates models which are good nearest neighbour classifiers [5].…”
Section: Our Approach
confidence: 99%
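The "good nearest neighbour classifiers" remark refers to evaluating frozen self-supervised embeddings with a k-NN classifier over a feature bank; a minimal sketch, with illustrative names and a temperature value that is an assumption, follows.

```python
# Hedged sketch: weighted k-NN classification over frozen, L2-normalised
# self-supervised embeddings.
import torch

def knn_predict(query, bank_feats, bank_labels, k=20, temperature=0.07):
    """query: (Q, D); bank_feats: (N, D) L2-normalised; bank_labels: (N,) int64."""
    sim = query @ bank_feats.t()                 # cosine similarities, shape (Q, N)
    topk_sim, topk_idx = sim.topk(k, dim=1)      # k nearest neighbours per query
    weights = (topk_sim / temperature).exp()     # temperature-softened vote weights
    num_classes = int(bank_labels.max()) + 1
    votes = torch.zeros(query.size(0), num_classes)
    votes.scatter_add_(1, bank_labels[topk_idx], weights)
    return votes.argmax(dim=1)
```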
“…In this section, we explore whether it is necessary to use a function of the InfoNCE-loss form to push the probability toward the one-hot form. To this end, we consider the binary cross-entropy (BCE) loss used in [79]. For convenience, we redefine the probabilities p_i and p̄_i as p_i^0 and p_i^1.…”
Section: The Importance of InfoNCE Loss
confidence: 99%
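For reference, the InfoNCE form discussed in the excerpt scores a positive pair against a set of negatives with a softmax, so minimising the loss drives the resulting probability toward one-hot. The sketch below uses illustrative names and is not taken from the cited paper.

```python
# Minimal sketch of an InfoNCE loss: cross-entropy over one positive and K
# negative similarities, with the positive placed at index 0.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.1):
    """anchor, positive: (B, D); negatives: (B, K, D); all L2-normalised."""
    pos_logit = (anchor * positive).sum(dim=1, keepdim=True) / temperature    # (B, 1)
    neg_logits = torch.einsum("bd,bkd->bk", anchor, negatives) / temperature  # (B, K)
    logits = torch.cat([pos_logit, neg_logits], dim=1)                        # (B, 1+K)
    targets = torch.zeros(anchor.size(0), dtype=torch.long)                   # positive is class 0
    return F.cross_entropy(logits, targets)
```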
“…Given that, we propose Domain-Specific Contrastive Learning (DSCL) to fully mine intra-domain knowledge by selecting the positive and negative samples only from the query's domain for contrastive learning. Furthermore, inspired by recent negative mining methods (Kalantidis et al. 2020; Zhong et al. 2021) that interpolate in the latent space to synthesize more negative samples, we propose Second-Order Nearest Interpolation (SONI) to obtain additional hard negative samples for the query from the target domain. To ensure the synthetic negative samples are reliable and informative, SONI selects a set of nearest negative centroids and then uses each centroid as an anchor to interpolate with the negative centroid nearest to it that has a different pseudo label.…”
Section: Introduction
confidence: 99%
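A hedged sketch of the second-order nearest interpolation idea described above; the function name, the number of anchors, the fixed interpolation weight, and the handling of pseudo labels are all assumptions chosen for illustration, not the cited paper's implementation.

```python
# Hedged sketch: interpolate each of the query's nearest negative centroids with
# the centroid nearest to *it* that carries a different pseudo label.
import torch
import torch.nn.functional as F

def soni_negatives(query, centroids, pseudo_labels, num_anchors=8, lam=0.5):
    """query: (D,); centroids: (C, D) L2-normalised; pseudo_labels: (C,) int64."""
    # First order: centroids closest to the query serve as anchors.
    anchor_idx = (centroids @ query).topk(num_anchors).indices
    synthetic = []
    for i in anchor_idx:
        # Second order: nearest centroid to this anchor with a different pseudo label.
        sims = centroids @ centroids[i]
        sims[pseudo_labels == pseudo_labels[i]] = float("-inf")
        j = sims.argmax()
        synthetic.append(lam * centroids[i] + (1 - lam) * centroids[j])
    return F.normalize(torch.stack(synthetic), dim=1)
```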