2018
DOI: 10.48550/arxiv.1812.07627
Preprint
Clustering-Oriented Representation Learning with Attractive-Repulsive Loss

Abstract: The standard loss function used to train neural network classifiers, categorical cross-entropy (CCE), seeks to maximize accuracy on the training data; building useful representations is not a necessary byproduct of this objective. In this work, we propose clustering-oriented representation learning (COREL) as an alternative to CCE in the context of a generalized attractive-repulsive loss framework. COREL has the consequence of building latent representations that collectively exhibit the quality of natural clu…

Cited by 1 publication
(1 citation statement)
References 4 publications
“…The α parameter of the modified MobileNetV2 was set to 1.0. As the loss function, a pixelwise adaptation of the COREL-loss [35] with a γ value of 0.5 was chosen. The U-Net models were trained for up to 5000 steps with a batch size of 25, 5 images per class, using the Adam optimizer and a learning rate of 2 × 10−4.…”
Section: Training
confidence: 99%
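The cited configuration adapts the COREL loss, which the abstract situates in a generalized attractive-repulsive framework: each embedding is attracted toward its own class's region of latent space and repelled from the others. As a rough illustration only, here is a minimal NumPy sketch of a generic attractive-repulsive objective; the exact COREL formulation in the paper differs, the `gamma` parameter merely mirrors the γ weighting mentioned in the citing work, and all function and variable names are hypothetical:

```python
import numpy as np

def attractive_repulsive_loss(z, y, centroids, gamma=0.5):
    """Illustrative attractive-repulsive objective (not the exact COREL loss):
    pull each embedding toward its own class centroid, push it away from
    the other centroids, with gamma weighting the repulsive term."""
    n, k = len(z), len(centroids)
    # Squared distance from every embedding to every class centroid.
    d2 = ((z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    # Attractive term: distance to the embedding's own class centroid.
    attract = d2[np.arange(n), y].mean()
    # Repulsive term: penalize closeness to the other classes' centroids.
    other = np.ones((n, k), dtype=bool)
    other[np.arange(n), y] = False
    repel = np.exp(-d2[other]).mean()
    return attract + gamma * repel

# An embedding near its own centroid incurs a lower loss than one
# sitting near a wrong-class centroid.
centroids = np.array([[0.0, 0.0], [3.0, 0.0]])
y = np.array([0])
loss_near = attractive_repulsive_loss(np.array([[0.1, 0.0]]), y, centroids)
loss_far = attractive_repulsive_loss(np.array([[2.9, 0.0]]), y, centroids)
```

Under this sketch, minimizing the loss drives same-class embeddings toward a common centroid while separating the classes, which is the clustering-friendly behavior the abstract attributes to COREL.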