2022
DOI: 10.48550/arxiv.2201.12904
Preprint

COIN++: Neural Compression Across Modalities

Abstract: Neural compression algorithms are typically based on autoencoders that require specialized encoder and decoder architectures for different data modalities. In this paper, we propose COIN++, a neural compression framework that seamlessly handles a wide range of data modalities. Our approach is based on converting data to implicit neural representations, i.e. neural functions that map coordinates (such as pixel locations) to features (such as RGB values). Then, instead of storing the weights of the implicit neur…
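The core idea in the abstract — representing data as a function from coordinates to features — can be illustrated with a toy sketch. The code below is not COIN++'s method: it substitutes fixed random Fourier features and a closed-form least-squares readout for the trained sine-layer MLP the paper uses, and all sizes and names are illustrative.

```python
import numpy as np

# Toy implicit neural representation: a function mapping pixel
# coordinates (x, y) to RGB values, fit to a small random "image".
rng = np.random.default_rng(0)

H, W = 8, 8
image = rng.random((H, W, 3))  # toy data to represent

# Coordinate grid in [-1, 1]^2, one row per pixel.
ys, xs = np.meshgrid(
    np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij"
)
coords = np.stack([xs.ravel(), ys.ravel()], axis=1)  # (H*W, 2)

# Fixed sinusoidal encoding (a stand-in for learned sine layers).
B = rng.normal(scale=3.0, size=(2, 64))  # random frequencies
feats = np.concatenate([np.sin(coords @ B), np.cos(coords @ B)], axis=1)

# Fit the linear readout in closed form instead of gradient descent.
targets = image.reshape(-1, 3)
weights, *_ = np.linalg.lstsq(feats, targets, rcond=None)

# "Decoding" the image = evaluating the function at every coordinate.
recon = (feats @ weights).reshape(H, W, 3)
print(float(np.abs(recon - image).mean()))  # small reconstruction error
```

In COIN-style compression the stored object would be the (quantized) function parameters rather than the pixel grid; this sketch only shows the coordinates-to-features mapping itself.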

Cited by 5 publications (4 citation statements)
References 41 publications (57 reference statements)
“…Further speedups for Neural Collages compressors could similarly be found via tracking during training. We note concurrent work on an improved version of COIN (Dupont et al, 2022) that is trained in parallel on patches, similarly to Neural Collage compressors.…”
Section: Related Work and Discussion (mentioning)
confidence: 99%
“…is attributed to its significant impact on climate prediction, mitigation, and adaptation. Employing neural networks for weather and climate data tasks has shown impressive results across a variety of tasks, e.g., super-resolution (Wang et al, 2021;Yang et al, 2022), temporal modeling (Wang et al, 2018;Stengel et al, 2020), and compression (Dupont et al, 2022;Huang & Hoefler, 2023). Neural network architectures like spherical convolutional neural networks (Cohen et al, 2018) and spherical Fourier neural operators (Bonev et al, 2023) have been tailored for this domain.…”
Section: Related Work (mentioning)
confidence: 99%
“…Coordinate networks [4] (also termed as implicit neural representation or neural fields) are gradually replacing traditional discrete representations in computer vision and graphics. Different from classical matrix-based discrete representation, coordinate networks focus on learning a neural mapping function with low-dimensional coordinates inputs and the corresponding signal values outputs, and have demonstrated the advantages of continuous querying and memory-efficient footprint in various signal representation tasks, such as images [5], [6], [7], scenes [24], [27], [30], [35] and videos [21], [22]. Additionally, coordinate networks could be seamlessly combined with different differentiable physical processes, opening a new way for solving various inverse problems, especially the domain-specific tasks where large-scale labelled datasets are unavailable, such as the shape representation [23], [25], [28], [29], [36], computed tomography reconstruction [26], [31], [32], [33], [34] and inverse rendering for novel view synthesis [37], [38], [41], [42], [100].…”
Section: Coordinate Network (mentioning)
confidence: 99%
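The "continuous querying" advantage mentioned in the last citation statement — that a coordinate network can be evaluated between the original samples, with no resampling step — can be sketched as follows. This is a hypothetical 1D toy (fixed sine features plus a least-squares readout standing in for a trained coordinate network); it demonstrates the query mechanism only, and makes no claim about off-grid accuracy.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(t, B):
    """Fixed sinusoidal coordinate encoding for 1D inputs."""
    t = np.asarray(t, dtype=float)[:, None]
    return np.concatenate([np.sin(t @ B), np.cos(t @ B)], axis=1)

# Discrete samples of a 1D signal on a coarse grid.
t_train = np.linspace(0.0, 1.0, 16)
signal = np.sin(2 * np.pi * t_train)

# Fit the linear readout so the function reproduces the samples.
B = rng.normal(scale=4.0, size=(1, 32))
w, *_ = np.linalg.lstsq(encode(t_train, B), signal, rcond=None)

# Query at coordinates that were never stored: just evaluate the function.
t_query = (t_train[:-1] + t_train[1:]) / 2  # midpoints between samples
preds = encode(t_query, B) @ w
print(preds.shape)
```

A matrix-based discrete representation would need an explicit interpolation scheme here; the coordinate network accepts any real-valued coordinate directly.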