2016
DOI: 10.1109/tpami.2015.2496141
Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks

Abstract: Deep convolutional networks have proven to be very successful in learning task-specific features that allow for unprecedented performance on various computer vision tasks. Training of such networks mostly follows the supervised learning paradigm, where sufficiently many input-output pairs are required for training. Acquiring large training sets is one of the key challenges when approaching a new task. In this paper, we aim for generic feature learning and present an approach for training a convo…

Cited by 1,065 publications (1,072 citation statements)
References 27 publications
“…Notably, the discriminator has many fewer feature maps (512 in the highest layer) compared to K-means based techniques, but does result in a larger total feature vector size due to the many layers of 4 × 4 spatial locations. The performance of DCGANs is still less than that of Exemplar CNNs (Dosovitskiy et al., 2015), a technique which trains normal discriminative CNNs in an unsupervised fashion to differentiate between specifically chosen, aggressively augmented exemplar samples from the source dataset. Further improvements could be made by fine-tuning the discriminator's representations, but we leave this for future work.…”
Section: Classifying CIFAR-10 Using GANs as a Feature Extractor
mentioning confidence: 99%
“…Weakly-Supervised Feature Learning: For the purpose of object recognition, Dosovitskiy et al. [11] trained the network to discriminate between a set of surrogate classes formed by applying various transformations. For object matching, Lin et al. [28] proposed an unsupervised learning scheme that learns a compact binary descriptor by leveraging iterative training.…”
Section: Related Work
mentioning confidence: 99%
“…Furthermore, since the pre-learned sampling patterns used in the CSS layers are fixed over an entire image, they may be sensitive to non-rigid deformation as described in [26]. To address this, we perform the max-pooling operation within a spatial window N_i centered at a pixel i after the non-linear gating (Eq. (11)), where W^k_λ is a learnable parameter for scale k. The max-pooling layer provides an effect similar to using pixel-varying sampling patterns, providing robustness to non-rigid deformation. The descriptor for each pixel then undergoes L2 normalization.…”
Section: Non-linear Gating and Max-pooling Layer
mentioning confidence: 99%
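The windowed max-pooling this excerpt describes can be sketched in a few lines. This is a minimal illustration only, not the cited paper's actual layer: the learnable gating parameter W^k_λ and the L2 normalization step are omitted, and the function name and window size are assumptions.

```python
import numpy as np

def windowed_max_pool(responses, window=3):
    """Max-pool each pixel's response over a (window x window) spatial
    neighborhood N_i centered at that pixel. Borders are padded with
    -inf so out-of-image positions never win the max."""
    pad = window // 2
    padded = np.pad(responses, pad, mode="constant",
                    constant_values=-np.inf)
    H, W = responses.shape
    out = np.empty_like(responses)
    for i in range(H):
        for j in range(W):
            # Maximum over the neighborhood centered at (i, j).
            out[i, j] = padded[i:i + window, j:j + window].max()
    return out
```

Because each output pixel takes the strongest response in its neighborhood, small spatial shifts of a feature (as under non-rigid deformation) leave the pooled value unchanged, which is the robustness effect the excerpt attributes to this layer.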
“…The paper [1] discusses training a Convolutional Neural Network using only unlabelled data to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled seed image patch.…”
Section: Literature Survey
mentioning confidence: 99%
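The surrogate-class construction described above can be sketched compactly: one randomly sampled seed patch, augmented many times, yields one class. The transformation set here (flip, rotation, brightness shift) is a simplified, hypothetical stand-in for the richer family (translation, scaling, rotation, color jitter) used in the Exemplar-CNN paper, and all function names are invented for illustration.

```python
import numpy as np

def random_transform(patch, rng):
    """Apply one randomly chosen transformation to a seed patch.
    Only three simple augmentations are sketched here."""
    t = rng.integers(3)
    if t == 0:
        return patch[:, ::-1]      # horizontal flip
    if t == 1:
        return np.rot90(patch)     # 90-degree rotation
    # brightness shift, clipped back into [0, 1]
    return np.clip(patch + rng.uniform(-0.2, 0.2), 0.0, 1.0)

def make_surrogate_class(seed_patch, n_samples, rng):
    """One surrogate class = many augmented copies of a single seed
    patch; the network is then trained to classify which seed each
    augmented sample came from."""
    return np.stack([random_transform(seed_patch, rng)
                     for _ in range(n_samples)])

rng = np.random.default_rng(0)
seed = rng.random((32, 32))                 # one sampled seed patch
surrogate = make_surrogate_class(seed, 8, rng)
print(surrogate.shape)                      # (8, 32, 32)
```

Repeating this for N seed patches gives an N-way classification problem that requires no human labels, since the "label" of each sample is simply the index of its seed patch.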