2022
DOI: 10.48550/arxiv.2206.06072
Preprint

Rank Diminishing in Deep Neural Networks

Abstract: The rank of neural networks measures information flowing across layers. It is an instance of a key structural condition that applies across broad domains of machine learning. In particular, the assumption of low-rank feature representations leads to algorithmic developments in many architectures. For neural networks, however, the intrinsic mechanism that yields low-rank structures remains vague and unclear. To fill this gap, we perform a rigorous study on the behavior of network rank, focusing particularly on …
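
The abstract's notion of rank as a measure of information flow can be probed empirically by computing the numerical rank of intermediate feature matrices as a batch of inputs passes through a network. Below is a minimal sketch of such a probe, not the paper's protocol: it hooks named sub-modules of a PyTorch model, flattens each batch of activations into a (batch, features) matrix, and counts singular values above a relative threshold. The tolerance value, the chosen layer names, and the torchvision resnet18 used in the usage lines are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

def numerical_rank(features: torch.Tensor, rel_tol: float = 1e-3) -> int:
    """Count singular values above rel_tol * sigma_max of the flattened feature matrix."""
    mat = features.flatten(1).float()      # (batch, C*H*W) for conv activations
    s = torch.linalg.svdvals(mat)          # singular values in descending order
    return int((s > rel_tol * s[0]).sum().item())

def layer_ranks(model: nn.Module, x: torch.Tensor, layer_names) -> dict:
    """Hook the named sub-modules and record the numerical rank of each one's output.
    Note: the measured rank is capped by the batch size, so use a batch at least as
    large as the feature dimension you care about."""
    ranks, hooks = {}, []
    for name, module in model.named_modules():
        if name in layer_names:
            hooks.append(module.register_forward_hook(
                lambda mod, inp, out, name=name: ranks.__setitem__(name, numerical_rank(out))))
    model.eval()
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return ranks

if __name__ == "__main__":
    # Illustrative usage with a torchvision model and a random batch (assumption, not the paper's setup).
    from torchvision.models import resnet18
    model = resnet18(weights=None)
    x = torch.randn(64, 3, 224, 224)
    print(layer_ranks(model, x, ["layer1", "layer2", "layer3", "layer4"]))
```

In this sketch a monotone drop in the reported ranks from layer1 to layer4 would be one concrete signature of the rank-diminishing behavior the abstract refers to.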

Cited by 1 publication (8 citation statements)
References 35 publications
“…To the extent that the previous literature has considered latent dimensionality, it has predominantly argued that both biological and artificial vision systems gain computational benefits by representing stimuli in low-dimensional manifolds (Ansuini et al, 2019; Cohen et al, 2020; Lehky et al, 2014). For instance, it has been hypothesized that dimensionality reduction along the visual hierarchy confers robustness to incidental image features (Amsaleg et al, 2017; Feng et al, 2022; I. Fischer & Alemi, 2020; I.…”
Section: Results (mentioning)
confidence: 99%
“…The results, illustrated in Figure 4b, show a striking benefit of high-dimensionality for this task. Even though high-dimensional representations have traditionally been thought to be undesirable for object classification (Chung et al, 2018; Cohen et al, 2020; Feng et al, 2022; I. Fischer & Alemi, 2020; I.…”
Section: Results (mentioning)
confidence: 99%