2021
DOI: 10.48550/arxiv.2112.01917
Preprint

A Structured Dictionary Perspective on Implicit Neural Representations

Abstract: Propelled by new designs that make it possible to circumvent the spectral bias, implicit neural representations (INRs) have recently emerged as a promising alternative to classical discretized representations of signals. Nevertheless, despite their practical success, we still lack a proper theoretical characterization of how INRs represent signals. In this work, we aim to fill this gap, and we propose a novel unified perspective to theoretically analyse INRs. Leveraging results from harmonic analysis and deep learning t…

Cited by 1 publication (2 citation statements) | References 20 publications

“…Network Architecture: For the network architecture, we implement a fully-connected MLP with four activated linear layers of 256 hidden units each and one output linear layer. For the ReLU-based implementation, we follow the conclusions in [43,36] and apply positional encoding (P.E.) to the inputs (an additional 20 channels).…”
Section: Methods
Mentioning confidence: 99%
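
As a concrete reading of this statement, here is a minimal PyTorch sketch of such a network: four ReLU-activated linear layers of 256 units plus one linear output layer, with a frequency-based positional encoding on the inputs. The layer count and width come from the quote; the 2D coordinate input, the 5 frequency bands (2 × 5 × 2 = 20 extra channels, matching the quoted "additional 20 channels"), and all names are illustrative assumptions, not details taken from the citing paper.

```python
import math

import torch
import torch.nn as nn


def positional_encoding(coords: torch.Tensor, num_freqs: int = 5) -> torch.Tensor:
    """Map coordinates to sin/cos features at dyadic frequencies.

    For 2D inputs and 5 frequency bands this adds 2 * 5 * 2 = 20 channels,
    matching the "additional 20 channels" mentioned in the citing paper
    (the 2D input and 5 bands are assumptions).
    """
    feats = [coords]
    for k in range(num_freqs):
        feats.append(torch.sin(2.0 ** k * math.pi * coords))
        feats.append(torch.cos(2.0 ** k * math.pi * coords))
    return torch.cat(feats, dim=-1)


class ReLUINR(nn.Module):
    """Fully connected MLP: four ReLU-activated linear layers of 256 units
    each plus one output linear layer, as described in the statement above."""

    def __init__(self, in_dim: int = 2 + 20, hidden: int = 256, out_dim: int = 1):
        super().__init__()
        layers = []
        dim = in_dim
        for _ in range(4):                      # 4 activated linear layers
            layers += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
        layers.append(nn.Linear(dim, out_dim))  # 1 output linear layer
        self.net = nn.Sequential(*layers)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.net(positional_encoding(coords))


# Example: evaluate the INR on a batch of normalized 2D coordinates.
model = ReLUINR()
xy = torch.rand(1024, 2)   # (x, y) pixel coordinates in [0, 1]
out = model(xy)            # shape: (1024, 1)
```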
“…The angular velocity of the sinusoidal function and the weight initialization method are the same as in the image regression task. Taking the conclusions in [36,43] into consideration, we run two parallel experiments, with and without positional encoding, for all activation functions except for sine.…”
Section: Different Activation Functions
Mentioning confidence: 99%
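
For contrast with the positional-encoding baseline above, here is a hedged sketch of the sine-activated network this statement alludes to. The quote does not specify the angular velocity or the initialization, so the values below (omega_0 = 30 and uniform weight bounds of ±sqrt(6/fan_in)/omega_0, as in the original SIREN formulation of Sitzmann et al.) are assumptions; sine networks consume raw coordinates, so no positional encoding is applied here.

```python
import math

import torch
import torch.nn as nn


class SineLayer(nn.Module):
    """Linear layer followed by sin(omega_0 * x), with SIREN-style uniform
    initialization. omega_0 = 30 and the init bounds follow Sitzmann et al.;
    the citing paper only says it reuses its image-regression settings."""

    def __init__(self, in_dim: int, out_dim: int, omega_0: float = 30.0,
                 is_first: bool = False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_dim, out_dim)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_dim                      # first-layer init
            else:
                bound = math.sqrt(6.0 / in_dim) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.omega_0 * self.linear(x))


# Sine branch of the parallel experiments: raw 2D coordinates in, no P.E.
# (the matching ReLU/other-activation branches would use the encoded inputs).
siren = nn.Sequential(
    SineLayer(2, 256, is_first=True),
    SineLayer(256, 256),
    SineLayer(256, 256),
    SineLayer(256, 256),
    nn.Linear(256, 1),
)
out = siren(torch.rand(1024, 2))   # shape: (1024, 1)
```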