2020
DOI: 10.48550/arxiv.2012.02189
Preprint

Learned Initializations for Optimizing Coordinate-Based Neural Representations

Abstract: Coordinate-based neural representations have shown significant promise as an alternative to discrete, array-based representations for complex low-dimensional signals. However, optimizing a coordinate-based network from randomly initialized weights for each new signal is inefficient. We propose applying standard meta-learning algorithms to learn the initial weight parameters for these fully-connected networks based on the underlying class of signals being represented (e.g., images of faces or 3D models of chairs…
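The abstract describes fully-connected networks that map input coordinates to signal values and are optimized separately for each new signal. A minimal sketch of that setup is shown below, assuming a small MLP fit to a single 2D image by gradient descent; the layer sizes, learning rate, and function names are illustrative assumptions rather than the authors' implementation.

```python
# Minimal coordinate-based neural representation: an MLP maps (x, y) pixel
# coordinates to RGB values and is fit to one image by gradient descent.
# All names, layer sizes, and hyperparameters are illustrative assumptions.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes=(2, 256, 256, 3)):
    """Randomly initialize fully-connected layer weights (the per-signal starting point)."""
    params = []
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (d_in, d_out)) * jnp.sqrt(2.0 / d_in)
        params.append((w, jnp.zeros(d_out)))
    return params

def mlp(params, coords):
    """Evaluate the network at a batch of coordinates; output is RGB."""
    h = coords
    for w, b in params[:-1]:
        h = jax.nn.relu(h @ w + b)
    w, b = params[-1]
    return h @ w + b

def loss(params, coords, rgb):
    """Mean squared reconstruction error over the sampled pixels."""
    return jnp.mean((mlp(params, coords) - rgb) ** 2)

@jax.jit
def sgd_step(params, coords, rgb, lr=1e-2):
    """One gradient-descent step on a single signal."""
    grads = jax.grad(loss)(params, coords, rgb)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
```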

Cited by 7 publications (7 citation statements)
References 26 publications
“…Recently, Sitzmann et al (2020a); Tancik et al (2020a) have shown that applying MAML (Finn et al, 2017) to INRs can reduce fitting at test time to just a few gradient steps. Instead of minimizing L(θ, d) directly via gradient descent from a random initialization, we can meta-learn an initialization θ * such that minimizing L(θ, d) can be done in a few gradient steps.…”
Section: Meta-learning Modulations
confidence: 99%
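The quoted statement summarizes the core idea: rather than descending on L(θ, d) from a random initialization, meta-learn an initialization θ* so that a few gradient steps suffice at test time. Below is a minimal MAML-style sketch of that inner/outer loop, reusing the mlp and loss functions from the sketch after the abstract; the step counts and learning rates are assumptions, not the values used by Tancik et al. or Sitzmann et al.

```python
# MAML-style meta-learning of the initialization theta*, reusing loss() from
# the sketch above. At test time, a new signal is fit with only INNER_STEPS
# gradient steps starting from theta*. Hyperparameters are assumptions.
import jax

INNER_STEPS = 2    # assumed number of test-time gradient steps
INNER_LR = 1e-2    # assumed inner-loop learning rate
OUTER_LR = 1e-5    # assumed outer-loop (meta) learning rate

def inner_adapt(theta, coords, rgb):
    """Fit one signal d = (coords, rgb) with a few gradient steps from theta."""
    for _ in range(INNER_STEPS):
        grads = jax.grad(loss)(theta, coords, rgb)
        theta = jax.tree_util.tree_map(lambda p, g: p - INNER_LR * g, theta, grads)
    return theta

def meta_loss(theta, coords, rgb):
    """Loss of the adapted weights; gradients flow back through the inner steps."""
    adapted = inner_adapt(theta, coords, rgb)
    # For signal fitting, the same observations serve for adaptation and evaluation.
    return loss(adapted, coords, rgb)

@jax.jit
def meta_step(theta, coords, rgb):
    """One outer update of the shared initialization theta* on one sampled signal."""
    grads = jax.grad(meta_loss)(theta, coords, rgb)
    return jax.tree_util.tree_map(lambda p, g: p - OUTER_LR * g, theta, grads)
```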
“…However, paying a significant computational cost upfront to compress content for delivery to many receivers is a standard practice in the setting of one-to-many media distribution, e.g., at Netflix [1]. Nevertheless, this limitation could likely be sidestepped with meta-learning [29,33] or amortized inference approaches. Further, at decoding time, we are required to evaluate the network at every pixel location to decode the full image.…”
Section: Scope, Limitations and Future Work
confidence: 99%
“…Recent work in generative modeling of implicit representations [10] suggests that learning a distribution over the function weights could translate to significant compression gains for our approach. In addition, exploring meta-learning or other amortization approaches for faster encoding could be an important direction for future work [29,33]. Refining the architectures of the functions representing the images (through neural architecture search or pruning for example) is another promising avenue.…”
Section: Scope, Limitations and Future Work
confidence: 99%
“…network to train on a novel domain, in this case a different scene. Other methods of handling dynamic scenes include meta-learning, in which we would optimize a model initialization that can quickly adapt to any general scene; similar meta-learning formulations have already been proposed [Tancik et al 2020a]. Other directions include few-shot learning, which would entail fine-tuning to a novel scene configuration given a very limited sample budget; in the context of our problem, we may design specialized sampling methods targeted at regions with changed geometry.…”
Section: Limitations and Future Work
confidence: 99%