2021
DOI: 10.48550/arxiv.2104.08894
Preprint

The Intrinsic Dimension of Images and Its Impact on Learning

Phillip Pope,
Chen Zhu,
Ahmed Abdelkader
et al.

Abstract: It is widely believed that natural image data exhibits low-dimensional structure despite the high dimensionality of conventional pixel representations. This idea underlies a common intuition for the remarkable success of deep learning in computer vision. In this work, we apply dimension estimation tools to popular datasets and investigate the role of low-dimensional structure in deep learning. We find that common natural image datasets indeed have very low intrinsic dimension relative to the high number of pix…
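The "dimension estimation tools" referenced in the abstract are nearest-neighbour maximum-likelihood estimators in the style of Levina and Bickel, which estimate intrinsic dimension from the ratios of distances to a point's nearest neighbours. A minimal sketch of that estimator family (the function name and parameter choices here are illustrative, not the authors' exact implementation):

```python
import numpy as np

def mle_intrinsic_dimension(X, k=20):
    """Levina-Bickel-style MLE estimate of intrinsic dimension.

    X: (n, d) array of points; k: number of nearest neighbours.
    A hypothetical minimal sketch, not the paper's implementation:
    averages per-point inverse-dimension estimates over the dataset.
    """
    # Pairwise Euclidean distances (O(n^2) memory; fine for small n).
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt(np.sum(diffs ** 2, axis=-1))
    # Sort each row; column 0 is the zero distance to the point itself.
    dists.sort(axis=1)
    Tk = dists[:, k]        # distance to the k-th nearest neighbour
    Tj = dists[:, 1:k]      # distances to neighbours 1 .. k-1
    # Per-point inverse dimension: mean of log(T_k / T_j) over j.
    inv_m = np.log(Tk[:, None] / Tj).mean(axis=1)
    # Average the inverses across points for the global estimate.
    return 1.0 / inv_m.mean()
```

On synthetic data drawn from a 2-dimensional subspace embedded in a 10-dimensional ambient space, the estimate comes out close to 2, which is the sanity check the dimension-estimation literature typically applies before running such estimators on image data.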


Cited by 19 publications (29 citation statements)
References 21 publications
“…Notably, these results indicate that a small set of parameters govern the Covid-19 dynamics, which has important implications for practitioners seeking to model these dynamics or apply dimensionality reduction techniques. For example, Pope et al (2020) identify that lower IDs lower the sample complexity of learning, enabling more accessible learning for neural networks and better model generalisation from training to test data.…”
Section: Results (mentioning; confidence: 99%)
“…Fischer & Alemi, 2020; I. S. Fischer, 2020; Gallego et al, 2017; Gao & Ganguli, 2015; Gao et al, 2017; Gong et al, 2019; Kingma & Welling, 2013; Lee et al, 2021; Lehky et al, 2014; Ma et al, 2018; Nieh et al, 2021; Op de Beeck et al, 2001; Pope et al, 2021; Recanatesi et al, 2019; Saxena & Cunningham, 2019; Tishby & Zaslavsky, 2015; Zhu et al, 2018). Furthermore, our findings suggest that the design factors that have been a major focus of previous work, such as architecture and training, are of secondary importance and are best understood in the context of how they influence representational dimensionality.…”
Section: Discussion (mentioning; confidence: 99%)
“…Fischer & Alemi, 2020; I. S. Fischer, 2020; Gong, Boddeti, & Jain, 2019; Kingma & Welling, 2013; Lee, Arnab, Guadarrama, Canny, & Fischer, 2021; Ma et al, 2018; Pope, Zhu, Abdelkader, Goldblum, & Goldstein, 2021; Recanatesi et al, 2019; Tishby & Zaslavsky, 2015; Zhu et al, 2018). Similar arguments have been made for the benefits of low-dimensional manifolds in the sensory, motor, and cognitive systems of the brain (Churchland et al, 2012; Gallego, Perich, Miller, & Solla, 2017; Gao & Ganguli, 2015; Lehky, Kiani, Esteky, & Tanaka, 2014; Nieh et al, 2021; Op de Beeck, Wagemans, & Vogels, 2001; Saxena & Cunningham, 2019).…”
Section: Introduction (mentioning; confidence: 99%)
“…It is thought that in generative modeling, the generator's intrinsic dimensionality should ideally match that of the real image manifold [33,45]. While the latter is hard to calculate [14], it has been estimated to values as low as 20-50 [16,42]. Yet, training remains effective with our seemingly excessive overparameterization.…”
Section: Discussion (mentioning; confidence: 99%)