2018
DOI: 10.48550/arxiv.1810.06221
Preprint

Supervised COSMOS Autoencoder: Learning Beyond the Euclidean Loss!

Maneet Singh,
Shruti Nagpal,
Mayank Vatsa
et al.

Abstract: Autoencoders are unsupervised deep learning models used for learning representations. In the literature, autoencoders have been shown to perform well on a variety of tasks across multiple domains, establishing their widespread applicability. Typically, an autoencoder is trained to produce a model that minimizes the reconstruction error between the input and the reconstructed output, computed in terms of the Euclidean distance. While this can be useful for applications related to unsupervised reconstruction, i…

Cited by 1 publication (2 citation statements)
References 32 publications
Year Published (citing work): 2019
Order By: Relevance
“…Our idea is to learn large decision margins in feature space through enlarging the classifier's output of one class while suppressing those of other classes in an unsupervised way. Different from supervised mutual information [56,19,45,34], our MI loss maximizes mutual information between unlabeled target data X_t and classifier's prediction O_t inspired by [68,26].…”
Section: Mutual Information Loss for Discriminant Adaptation
confidence: 99%
“…Besides pseudo label based pre-adaptation, a novel mutual information (MI) based adaptation is proposed to further enhance the discriminative ability of the network output, which learns larger decision margins in an unsupervised way. Different from the common supervised losses and supervised MI methods [56,34], MI loss takes advantage of all unlabeled target data, no matter whether they are successfully assigned pseudo-labels or not, in virtue of its unsupervised property.…”
Section: Introduction
confidence: 99%
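The unsupervised MI objective described in the citation statements above is commonly computed as the entropy of the average prediction minus the average per-sample entropy: I(X_t; O_t) = H(E[p]) − E[H(p)]. Maximizing it pushes each prediction toward confidence while keeping class assignments balanced over the batch, using all unlabeled target samples. A minimal sketch (the function name and NumPy formulation are illustrative, not taken from the cited papers):

```python
import numpy as np

def mutual_info_objective(probs):
    """I(X_t; O_t) = H(E[p]) - E[H(p)], to be maximized.

    probs: (N, C) array of softmax outputs on unlabeled target data.
    """
    eps = 1e-12  # guard against log(0)
    p_mean = probs.mean(axis=0)  # marginal class distribution over the batch
    # H(E[p]): high when predicted classes are balanced across samples
    h_marginal = -np.sum(p_mean * np.log(p_mean + eps))
    # E[H(p)]: low when each individual prediction is confident
    h_conditional = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    return h_marginal - h_conditional

# Confident, balanced predictions score high; uniform predictions score ~0.
confident = np.array([[0.99, 0.01], [0.01, 0.99]])
uniform = np.full((2, 2), 0.5)
print(mutual_info_objective(confident) > mutual_info_objective(uniform))
```

In a training loop this quantity would be negated and added to the loss, so that gradient descent enlarges decision margins on the target domain without needing pseudo-labels.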