2021
DOI: 10.48550/arxiv.2104.01677
Preprint

A contrastive rule for meta-learning

Abstract: Meta-learning algorithms leverage regularities that are present on a set of tasks to speed up and improve the performance of a subsidiary learning process. Recent work on deep neural networks has shown that prior gradient-based learning of meta-parameters can greatly improve the efficiency of subsequent learning. Here, we present a biologically plausible meta-learning algorithm based on equilibrium propagation. Instead of explicitly differentiating the learning process, our contrastive meta-learning rule estim…
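The abstract is truncated above; as a rough illustration of the two-phase, equilibrium-propagation-style estimate it alludes to, here is a minimal numerical sketch. The quadratic inner problem, the L2 pull towards a meta-learned centre `omega`, and all variable names are illustrative assumptions rather than the paper's exact setup.

```python
# Minimal sketch (under the assumptions stated above) of a contrastive,
# equilibrium-propagation-style meta-gradient estimate: run the inner
# learning process twice, once free and once weakly nudged towards the
# validation loss, and contrast the two equilibria.
import numpy as np

rng = np.random.default_rng(0)
A, y_train = rng.normal(size=(5, 3)), rng.normal(size=5)   # inner (training) data
B, y_val = rng.normal(size=(4, 3)), rng.normal(size=4)     # outer (validation) data
lam, beta, lr = 0.5, 1e-3, 0.02                            # prior strength, nudging, step size

def solve(omega, nudge):
    """Run the inner learning process to (approximate) equilibrium:
    minimise 0.5*||A w - y_train||^2 + 0.5*lam*||w - omega||^2 + nudge * validation loss."""
    w = np.zeros(3)
    for _ in range(5000):
        grad = A.T @ (A @ w - y_train) + lam * (w - omega) + nudge * (B.T @ (B @ w - y_val))
        w -= lr * grad
    return w

omega = np.zeros(3)                  # meta-parameter: centre of the L2 prior
w_free = solve(omega, 0.0)           # phase 1: free equilibrium
w_nudged = solve(omega, beta)        # phase 2: weakly nudged equilibrium

# Contrastive estimate of d(validation loss at equilibrium)/d(omega):
# (1/beta) * [d inner_loss/d omega at nudged equilibrium - same at free equilibrium],
# where d inner_loss/d omega = lam * (omega - w).
meta_grad = lam * ((omega - w_nudged) - (omega - w_free)) / beta
print(meta_grad)                     # would then drive a meta-gradient step on omega
```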

Cited by 4 publications (12 citation statements). References 26 publications.

“…We apply our principle to learn equilibrium recurrent neural networks with a variety of architectures, from laterally-connected networks of leaky integrator neurons to deep equilibrium models [8][9][10], a recent family of high-performance models which repeat until convergence a sequence of more complex computations. To further demonstrate the generality of our principle, we then consider a recently studied meta-learning problem where the goal is to change the internal state of a complex synapse such that future learning performance is improved [30]. We find that our single-phase local learning rules yield highly competitive performance in both application domains.…”
Section: Introduction
confidence: 94%
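To make the "repeat until convergence" computation mentioned in this excerpt concrete, a minimal fixed-point iteration for a deep-equilibrium-style layer is sketched below; the single tanh map and all names are illustrative assumptions, not the architecture of the cited works.

```python
# Minimal sketch of a deep-equilibrium-style computation: iterate a
# parameterised map f(z, x) until it reaches a fixed point z* = f(z*, x).
# (Illustrative assumptions: a single tanh layer, scaled to be contractive.)
import numpy as np

rng = np.random.default_rng(1)
W = 0.3 * rng.normal(size=(8, 8)) / np.sqrt(8)   # small spectral norm -> contraction
U = rng.normal(size=(8, 4))
x = rng.normal(size=4)                            # input to the layer

def f(z, x):
    return np.tanh(W @ z + U @ x)

z = np.zeros(8)
for _ in range(500):
    z_next = f(z, x)
    if np.linalg.norm(z_next - z) < 1e-9:         # stop once the state has converged
        break
    z = z_next
print(z)                                          # equilibrium state z*, the layer's output
```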
“…A seminal result shows that as β → 0 the gradient ∇_θ L(φ*) of the original objective function is recovered [24,39]. However, this update is known to be sensitive to noise as the value of f can be very small at the weakly-nudged equilibrium [30]. Our least-control principle works at the opposite perfect control end of the spectrum (β → ∞), and it is therefore more resistant to noise.…”
Section: The Least-Control Principle as Constrained Energy Minimization
confidence: 99%
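For context, the finite-nudging estimator this excerpt refers to can be written in generic equilibrium propagation notation (the augmented objective and symbols below are assumed from the standard formulation, not quoted from the cited paper):

$$
\hat{g}(\beta) \;=\; \frac{1}{\beta}\Big[\,\partial_\theta F\big(\theta,\phi^*_\beta\big) \;-\; \partial_\theta F\big(\theta,\phi^*_0\big)\Big],
\qquad
\phi^*_\beta \;=\; \arg\min_{\phi}\, \big(F(\theta,\phi) + \beta\, L(\phi)\big),
$$

so that \(\hat{g}(\beta) \to \nabla_\theta L(\phi^*_0)\) as β → 0; with weak nudging the two equilibria nearly coincide, which is why the finite difference is easily dominated by noise.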
“…Specifically, we first develop a low-complexity offline solution that obtains hyperparameters and predictors in closed form. Then, an online algorithm is proposed based on gradient descent and equilibrium propagation (EP) [7], [8]. Previous applications of meta-learning to communication systems include demodulation [9], [10], channel equalization [11], encoding/decoding [12], [13], [14], MIMO detection [15], beamforming [16], [17], and resource allocation [18].…”
Section: Introduction
confidence: 99%
“…This growth rate was much faster than that of technology scaling, and outweighed the efforts to reduce the network computational footprint [16]. In order to improve the ability of ANN-based AI to scale, diversify, and generalize from limited data while avoiding catastrophic forgetting, meta-learning approaches are investigated [17]-[22]. These approaches aim at building systems that are tailored to their environment and can quickly adapt once deployed, just as evolution shapes the degrees of versatility and online adaptation of biological brains [23].…”
confidence: 99%