2018
DOI: 10.1007/978-3-030-03493-1_87
Towards Complex Features: Competitive Receptive Fields in Unsupervised Deep Networks

Cited by 4 publications (5 citation statements)
References 14 publications
“…In this work we introduced the modular building blocks of CSNNs to learn representations in an unsupervised manner…” [The quote continues with a fragment of an accuracy table: [15] 83.35 / 90.59; [29] 78.57 (SVM); K-means (Triangle) (best single model accuracy we know) [30] 79.60 (SVM).]
Section: Discussion (mentioning)
confidence: 99%
“…Chan et al [28] used Principal Component Analysis (PCA) in a convolutional manner to learn two-stage filter banks, which are followed by binary hashing and block histograms to compress the representation. In a similar way Hankins et al [29] used SOMs and additionally introduced pooling layers to shrink SOM activations along the feature dimension. Coates et al [30] studied hyperparameter choices and preprocessing strategies on several single-layer networks learned on convolutional patches, including sparse auto-encoders, sparse restricted Boltzmann machines, K-means clustering, and Gaussian mixture models and showed, e.g., general performance gains when the representation size is increased.…”
Section: B. Unsupervised Backpropagation-Free Representation Learning (mentioning)
“…The continuous advancement of deep network architectures in image classification generates networks adaptable for face recognition. The study of Dong et al [156], Bruna and Mallat [157], and Hankins et al [158] are some image classification networks with prospect for face classification. Hence, face recognition will remain an active research striving for sophisticated frameworks.…”
Section: Basic Challenges for Face Recognition (mentioning)
confidence: 99%
“…As a result, simpler models are achieved, and hence, the interpretability is better. Deep learning is nowadays on the rise (Hankins, Peng, & Yin, 2018) and could also be benefited from the data selection step.…”
Section: Applying a Data Preprocessing Phase (mentioning)
confidence: 99%