2021
DOI: 10.48550/arxiv.2105.00470
Preprint

On Feature Decorrelation in Self-Supervised Learning

Abstract: In self-supervised representation learning, a common idea behind most of the state-of-the-art approaches is to enforce the robustness of the representations to predefined augmentations. A potential issue of this idea is the existence of completely collapsed solutions (i.e., constant features), which are typically avoided implicitly by carefully chosen implementation details. In this work, we study a relatively concise framework containing the most common components from recent approaches. We verify the existen…
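The completely collapsed solutions mentioned in the abstract are easy to monitor in practice. The sketch below is a minimal PyTorch illustration, not the paper's own evaluation protocol; the function name and the std-based criterion are assumptions. It checks for complete collapse via the per-dimension standard deviation of normalized embeddings, which drops toward zero when the encoder outputs constant features.

```python
import torch
import torch.nn.functional as F

def embedding_std(z: torch.Tensor) -> torch.Tensor:
    """Per-dimension standard deviation of a batch of embeddings z with shape (N, D).

    After projecting onto the unit hypersphere, a healthy encoder keeps this
    around (1/D) ** 0.5 on average, while a completely collapsed encoder
    (constant features) drives it toward zero.
    """
    z = F.normalize(z, dim=1)   # row-normalize each embedding
    return z.std(dim=0)         # one value per feature dimension

# Example: constant features give ~0, random features stay well above 0.
collapsed = torch.ones(256, 128)
healthy = torch.randn(256, 128)
print(embedding_std(collapsed).mean().item())  # ~0.0
print(embedding_std(healthy).mean().item())    # roughly (1/128) ** 0.5 ≈ 0.088
```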

Cited by 6 publications (11 citation statements) | References 32 publications
“…Together, these results suggest that each of the three components of LPL is crucial for learning disentangled representations in hierarchical DNNs. Two common causes for failure to disentangle representations are representational collapse and dimensional collapse, which results from excessively high correlations between neurons [49,50]. To disambiguate between these two possibilities, we computed the dimensionality of the output representations and the mean neuronal activity at every layer (Methods).…”
Section: LPL Disentangles Representations in Deep Hierarchical Networks
confidence: 99%
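The dimensionality measurement described in this passage can be approximated by the participation ratio of the embedding covariance spectrum. The function below is a hypothetical illustration of such a measure (the name and the exact estimator are assumptions, not necessarily what the citing authors used); it returns a value near D for well-spread representations and near 1 under dimensional collapse.

```python
import torch

def effective_dimensionality(z: torch.Tensor) -> float:
    """Participation ratio of the covariance eigenvalues of representations z (N, D).

    Returns a value in [1, D]: close to D when variance spreads evenly across
    dimensions, close to 1 when a few strongly correlated directions dominate
    (dimensional collapse).
    """
    z = z - z.mean(dim=0, keepdim=True)              # center each feature
    cov = (z.T @ z) / (z.shape[0] - 1)               # D x D covariance matrix
    eig = torch.linalg.eigvalsh(cov).clamp(min=0.0)  # non-negative spectrum
    return (eig.sum() ** 2 / (eig ** 2).sum()).item()
```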
“…In this section we provide an illustration and some discussions for degenerated (collapsed) solutions, or namely trivial solutions, in self-supervised representation learning. The discussion is inspired by the separation of complete collapse and dimensional collapse proposed in [19]. We show that our method naturally avoids complete collapse through feature-wise normalization, and could prevent/alleviate dimensional collapse through the decorrelation term Eq.…”
Section: B. Discussion on Degenerated Solutions in SSL
confidence: 96%
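The decorrelation term the citing authors refer to (their Eq.) is not reproduced in this excerpt, so the snippet below is only a generic sketch of the two ingredients the passage names: feature-wise normalization, which removes the constant (completely collapsed) solution, and a decorrelation penalty on the off-diagonal entries of the feature correlation matrix, which discourages dimensional collapse. The function name and weighting are illustrative assumptions.

```python
import torch

def decorrelation_loss(z: torch.Tensor, weight: float = 1.0) -> torch.Tensor:
    """Off-diagonal penalty on the feature correlation matrix of embeddings z (N, D).

    Feature-wise (per-dimension) normalization rules out constant features;
    the squared off-diagonal correlations then push different dimensions to
    carry decorrelated information.
    """
    z = (z - z.mean(dim=0)) / (z.std(dim=0) + 1e-6)   # feature-wise normalization
    corr = (z.T @ z) / z.shape[0]                     # D x D correlation matrix
    off_diag = corr - torch.diag(torch.diag(corr))    # zero out the diagonal
    return weight * (off_diag ** 2).sum()
```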
“…(19) will lead to trivial solutions: all the embeddings would degenerate to a fixed point on the hypersphere. This phenomenon is called complete collapse [19]. Denote Z_A and Z_B as the two embedding matrices of the two views (Z ∈ ℝ^(N×D), row-normalized); then in this case Z_A Z_B^⊤ ≅ 1 is an all-one matrix (so as…”
Section: B. Discussion on Degenerated Solutions in SSL
confidence: 99%
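The all-one-matrix claim in this passage is easy to verify numerically. The toy snippet below (shapes and variable names are illustrative) places every row-normalized embedding of both views at the same point on the hypersphere and checks that the resulting N × N similarity matrix Z_A Z_B^⊤ is indeed all ones.

```python
import torch
import torch.nn.functional as F

# Complete collapse on the hypersphere: every embedding in both views
# degenerates to the same unit vector v.
N, D = 8, 16
v = F.normalize(torch.randn(D), dim=0)   # one fixed point on the unit sphere
Z_A = v.expand(N, D)                     # all rows of view A equal v
Z_B = v.expand(N, D)                     # all rows of view B equal v

sim = Z_A @ Z_B.T                        # N x N matrix of cosine similarities
print(torch.allclose(sim, torch.ones(N, N), atol=1e-5))  # True: all-one matrix
```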