2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01863

A Structured Dictionary Perspective on Implicit Neural Representations


Cited by 42 publications (7 citation statements); references 27 publications.
“…Fathony et al. propose MFN [7], which employs a Hadamard product between linear layers and nonlinear activation functions. In [62], Yüce et al. prove that FFN [47], SIREN [42], and MFN [7] have the same expressive power as a structured signal dictionary. 3D Representations for View Synthesis.…”
Section: Related Work (mentioning)
confidence: 99%
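The Hadamard-product construction described above can be sketched numerically. The layer widths, sinusoidal filters, and random parameters below are illustrative assumptions, not the actual MFN [7] configuration:

```python
import numpy as np

# Minimal sketch of an MFN-style multiplicative filter network:
# each layer multiplies (Hadamard product) a linear map of the
# previous features with a fresh nonlinear filter of the input.
def mfn_forward(x, n_layers=3, hidden=16, seed=0):
    rng = np.random.default_rng(seed)
    omegas = [rng.normal(size=(x.shape[-1], hidden)) for _ in range(n_layers)]
    phis = [rng.normal(size=hidden) for _ in range(n_layers)]
    Ws = [rng.normal(size=(hidden, hidden)) / np.sqrt(hidden)
          for _ in range(n_layers - 1)]
    bs = [rng.normal(size=hidden) for _ in range(n_layers - 1)]

    z = np.sin(x @ omegas[0] + phis[0])            # first sinusoidal filter
    for W, b, omega, phi in zip(Ws, bs, omegas[1:], phis[1:]):
        # Hadamard product of a linear layer with a new input filter
        z = (z @ W + b) * np.sin(x @ omega + phi)
    w_out = rng.normal(size=hidden) / hidden
    return z @ w_out                               # one output per input point

x = np.linspace(-1.0, 1.0, 8).reshape(-1, 1)
y = mfn_forward(x)
print(y.shape)                                     # (8,)
```

Because each multiplication of sinusoids expands into sums of sinusoids at combined frequencies, such a network evaluates to a linear combination of dictionary atoms, which is the structure [62] exploits.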
“…Radiance mapping, also known as a radiance field, is a type of implicit neural representation (INR). There have been several studies [2, 7, 32, 42, 46, 47, 62] on the expressive power and inductive bias of INRs. Standard multi-layer perceptrons (MLPs) with the ReLU activation function are well known for a strong spectral bias toward reconstructing low-frequency signals [32].…”
Section: Introduction (mentioning)
confidence: 99%
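The spectral bias mentioned above can be illustrated with a toy experiment (all hyperparameters here are illustrative assumptions, not taken from [32]): a small ReLU MLP trained with an identical budget typically fits a low-frequency sinusoid much better than a high-frequency one.

```python
import numpy as np

# Toy spectral-bias demo: train the same small ReLU MLP on a
# low- and a high-frequency target with identical budgets and
# compare the final mean-squared errors.
def fit(freq, steps=2000, lr=0.05, hidden=128):
    rng = np.random.default_rng(1)
    x = np.linspace(-1, 1, 128).reshape(-1, 1)
    y = np.sin(freq * np.pi * x)
    W1 = rng.normal(size=(1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(size=(hidden, 1)) / np.sqrt(hidden); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.maximum(x @ W1 + b1, 0.0)        # ReLU features
        err = h @ W2 + b2 - y                   # prediction error
        dh = (err @ W2.T) * (h > 0)             # backprop through ReLU
        # full-batch gradient descent on mean-squared error
        W2 -= lr * h.T @ err / len(x); b2 -= lr * err.mean(0)
        W1 -= lr * x.T @ dh / len(x); b1 -= lr * dh.mean(0)
    return float((err ** 2).mean())

loss_low, loss_high = fit(2), fit(16)
print(loss_low < loss_high)                     # low frequency fits better
```

With these settings the low-frequency loss ends up well below the high-frequency one, matching the bias described in [32].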
“…As stated, we seek an INR that fits visual signals well but fits noise poorly in comparison. Inspired by [53], which proposed comparing eigenfunctions of the empirical neural tangent kernel (NTK) [22] of INRs to understand their approximation properties, we compare the fitting of noisy natural images under NTK gradient flow. The NTK gradient flow of an INR accurately captures the network's early-training behavior, so in tasks such as denoising, where we regularize via early stopping, the early training behavior determines the implicit bias.…”
Section: Implicit Bias of WIRE (mentioning)
confidence: 99%
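As a concrete illustration of an empirical NTK (computed here for an assumed toy one-hidden-layer sine model, not the INR architectures analyzed in [53]), each kernel entry is the inner product of the network's parameter gradients at two input points:

```python
import numpy as np

# Empirical NTK sketch for a toy model f(x) = a · sin(W x):
# K[i, j] = <df(x_i)/dtheta, df(x_j)/dtheta>, with gradients
# taken analytically w.r.t. both parameter groups.
rng = np.random.default_rng(0)
hidden = 32
W = rng.normal(size=(1, hidden)) * 3.0       # first-layer weights
a = rng.normal(size=hidden) / np.sqrt(hidden)

x = np.linspace(-1, 1, 20).reshape(-1, 1)
pre = x @ W                                  # (20, hidden) preactivations
grad_a = np.sin(pre)                         # df/da_k = sin(W_k x)
grad_W = a * np.cos(pre) * x                 # df/dW_k = a_k cos(W_k x) x

J = np.concatenate([grad_a, grad_W], axis=1) # per-point parameter Jacobian
K = J @ J.T                                  # empirical NTK Gram matrix

eigvals = np.linalg.eigvalsh(K)              # ascending eigenvalues
print(K.shape)                               # (20, 20)
```

Diagonalizing `K` as above gives the eigenfunctions whose decay rates govern which signal components are fitted first under NTK gradient flow, which is what early stopping exploits.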
“…To achieve this, we take inspiration from harmonic analysis and reconsider the nonlinear activation function used in the MLP. Recent work has shown that an INR can be interpreted as a structured signal-representation dictionary [53], where the activation nonlinearity dictates the atoms of the dictionary. For example, the sine activation creates a pseudo-Fourier representation of the signal that is maximally concentrated in the frequency domain [53].…”
Section: Introduction (mentioning)
confidence: 99%
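The dictionary view is easiest to see for a single hidden layer: with sine activations the network output is, by construction, a finite weighted sum of sinusoidal atoms. The weights, frequencies, and phases below are arbitrary illustrative values:

```python
import numpy as np

# A one-hidden-layer sine network equals a weighted sum of
# Fourier-like dictionary atoms a_i * sin(omega_i * x + phi_i).
omega = np.array([1.0, 4.0, 9.0])     # per-unit frequencies
phi = np.array([0.0, 0.5, -1.0])      # per-unit phases
a = np.array([0.7, -0.2, 0.1])        # output-layer weights

x = np.linspace(-np.pi, np.pi, 100)

siren = np.sin(np.outer(x, omega) + phi) @ a          # network form
atoms = sum(ai * np.sin(wi * x + pi_)                 # dictionary form
            for ai, wi, pi_ in zip(a, omega, phi))

print(np.allclose(siren, atoms))                      # True
```

The two expressions are identical term by term, which is why the choice of activation directly dictates the atoms available to the representation.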
“…SIREN relies on periodic activation functions, i.e., sinusoidal activations, to continuously represent signals with fine details. Both FFN and SIREN are efficient, and some works [19, 20] have proven that they are equivalent to each other. INRs have been adopted for many computer vision tasks, including neural radiance fields for novel view synthesis [21], image generation [22], unconditional video generation [23], and video interpolation [24].…”
Section: Introduction (mentioning)
confidence: 99%