2021
DOI: 10.48550/arxiv.2109.15089
Preprint

Biologically Plausible Training Mechanisms for Self-Supervised Learning in Deep Networks

Mufeng Tang, Yibo Yang, Yali Amit

Abstract: We develop biologically plausible training mechanisms for self-supervised learning (SSL) in deep networks. SSL, with a contrastive loss, is more natural as it does not require labelled data, and its robustness to perturbations yields more adaptable embeddings. Moreover, the perturbation of data required to create positive pairs for SSL is easily produced in a natural environment by observing objects in motion and with variable lighting over time. We propose a contrastive hinge-based loss whose error involves sim…

Cited by 1 publication (1 citation statement, published 2022)
References 19 publications

“…Doing so alleviates the need for backpropagation of errors into hidden layers and, as a result, allows formulating the weight updates as local learning rules. Such layer-local learning is similar to works combining a contrastive predictive coding (CPC) objective with greedy layer-wise training, thereby reducing or avoiding the need for backpropagation [46, 56]. In a similar vein, recent work by Illing et al. [24] showed that such greedy layer-wise learning is directly linked to local Hebbian learning rules.…”
Section: Discussion (citation type: mentioning)
Confidence: 98%