2013
DOI: 10.48550/arxiv.1311.4158
Preprint
Unsupervised Learning of Invariant Representations in Hierarchical Architectures

Cited by 22 publications (58 citation statements)
References 0 publications
“…Hierarchical unsupervised architectures have been shown to provide efficient learning strategies [1]. We show that unsupervised learning can optimize an average discriminability by computing sparse features.…”
Section: Introduction (mentioning)
confidence: 95%
“…The earliest attempt at theoretical justification for invariance of which we are aware is [1], which roughly states that enforcing invariance cannot increase the VC dimension of a model. Anselmi et al [2] and Mroueh, Voinea, and Poggio [23] propose heuristic arguments for improved sample complexity of invariant models. Sokolic et al [29] build on the work of Xu and Mannor [34] to obtain a generalisation bound for certain types of classifiers that are invariant to a finite set of transformations, while Sannai and Imaizumi [25] obtain a bound for models that are invariant to finite permutation groups.…”
Section: Related Work (mentioning)
confidence: 99%
“…The manifold structures may be known from prior knowledge, or could be estimated from data using a variety of manifold learning algorithms [42][43][44][45][46][47] . Based upon knowledge of these structures, some areas of prior research have focused on building invariant representations 48 or constructing invariant metrics 49 . On the other hand, most approaches today rely upon data augmentation by explicitly generating "virtual" examples from these manifolds 50,51 .…”
Section: Introduction (mentioning)
confidence: 99%
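The excerpts above all concern representations that are invariant to a group of transformations. A minimal sketch of the underlying group-averaging idea: pooling a feature response over every transformation in a (finite) group yields a signature that is unchanged when the input itself is transformed. The 1-D signal, the cyclic-shift group, and the `invariant_signature` helper below are illustrative assumptions, not the method of any of the cited works.

```python
import numpy as np

def invariant_signature(x, template):
    """Average a template response over all cyclic shifts of the input.

    Pooling over the full transformation group (here: the n cyclic
    shifts) makes the signature identical for x and any shifted x,
    since shifting x merely permutes the set of pooled responses.
    """
    n = len(x)
    responses = [np.dot(np.roll(x, k), template) for k in range(n)]
    return np.mean(responses)

rng = np.random.default_rng(0)
x = rng.normal(size=8)        # toy signal
t = rng.normal(size=8)        # toy template

sig_original = invariant_signature(x, t)
sig_shifted = invariant_signature(np.roll(x, 3), t)
assert np.isclose(sig_original, sig_shifted)  # invariant under shift
```

Replacing the mean with other pooling statistics (max, higher moments) gives the same invariance guarantee while retaining more selectivity, which is the trade-off the excerpted works analyze.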