2021
DOI: 10.48550/arxiv.2105.03120
Preprint

Neural 3D Scene Compression via Model Compression

Berivan Isik

Abstract: Rendering 3D scenes requires access to arbitrary viewpoints from the scene. Storage of such a 3D scene can be done in two ways: (1) storing 2D images taken from the 3D scene, from which the scene can be reconstructed via interpolation, or (2) storing a representation of the 3D scene itself that already encodes views from all directions. So far, traditional 3D compression methods have focused on the first type of storage and compressed the original 2D images with image compression techniques. With this approach, …
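For orientation, the second storage option amounts to fitting a coordinate-based network to the scene and then storing that network's weights. The sketch below is a minimal illustration of such a network; the layer widths and the RGB-plus-density output are illustrative assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Toy coordinate-based scene network: maps a 3D point and a viewing
    direction to an RGB color and a density value. Layer widths are
    illustrative; the cited works use different architectures."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, 4)  # (R, G, B, density)

    def forward(self, xyz: torch.Tensor, view_dir: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(torch.cat([xyz, view_dir], dim=-1)))

# Storing the scene then amounts to storing this module's parameters,
# which is what model-compression-based approaches target.
```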

Cited by 3 publications (7 citation statements)
References 31 publications
“…Hence the focus of this paper is on compression of Z. We assume that the simple CBN parameterized by θ can be compressed using model compression tools, e.g., (Bird et al., 2021; Isik, 2021), to a few bits per parameter with little loss in performance. Alternatively, we assume that the CBN may be trained to generalize across many point clouds, obviating the need to transmit θ.…”
Section: Latent Vectors
confidence: 99%
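The "few bits per parameter" compression of θ referred to here can be illustrated with plain uniform quantization; the sketch below is a simplified stand-in for the model compression tools the statement cites (Bird et al., 2021; Isik, 2021), not their actual method.

```python
import torch

def quantize_weights(w: torch.Tensor, bits: int = 4):
    """Uniform symmetric quantization of a weight tensor to `bits` bits.
    Returns integer codes plus one scale per tensor; in practice the
    low-bit codes would be bit-packed rather than kept in int8."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    codes = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return codes.to(torch.int8), scale

def dequantize(codes: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Reconstruct approximate float weights from codes and scale."""
    return codes.float() * scale
```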
“…However, while there has been an explosion of work using CBNs for representing specific objects and scenes (Park et al., 2019a; Mescheder et al., 2019; Mildenhall et al., 2020; Sitzmann et al., 2020; Yu A. et al., 2021; Barron et al., 2021; Hedman et al., 2021; Knodt et al., 2021; Martel et al., 2021; Srinivasan et al., 2021; Takikawa et al., 2021; Zhang et al., 2021), none of that work focuses on compressing those representations. (Two exceptions may be (Bird et al., 2021; Isik, 2021), which simply apply model compression to the CBNs.) Good lossy compression is nontrivial, and must make the optimal trade-off between the fidelity of the reconstruction and the number of bits used in its binary representation.…”
Section: Introduction
confidence: 99%
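The fidelity-versus-bits trade-off mentioned in this statement is conventionally written as a rate-distortion Lagrangian; the generic form below is standard notation, not a formula taken from the quoted paper.

```latex
\min_{\theta}\; \underbrace{D\!\left(x, \hat{x}_{\theta}\right)}_{\text{reconstruction fidelity}}
\;+\; \lambda\, \underbrace{R(\theta)}_{\text{bits in the binary representation}}
```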
“…Hence the focus of this paper is on compression of Z. We assume that the simple CBN parameterized by θ can be compressed using model compression tools, e.g., [13,33], to a few bits per parameter with little loss in performance. Alternatively, we assume that the CBN may be trained to generalize across point clouds, obviating the need to transmit θ.…”
Section: Latent Vectors
confidence: 99%
“…Also fortunately, it is likely that 32 bits per floating point parameter is an order of magnitude more than necessary. Prior work has shown that simple model compression can be performed at 8 bits per floating point parameter [10,77,86] or even more aggressively at 1-4 bits per floating point parameter [30,34,35,54,75,85,87] with very low loss in performance, even with coordinate based networks such as NeRF [13,33]. However, since model compression is outside the scope of our work, we simply parameterize our results by the number of bits per floating point parameter.…”
Section: Side Information
confidence: 99%
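To make the bits-per-parameter parameterization concrete, a back-of-the-envelope sketch follows; the one-million-parameter count is an assumed example, not a figure from the quoted paper.

```python
def model_size_mb(num_params: int, bits_per_param: int) -> float:
    """Storage footprint in megabytes for a given per-parameter bit width."""
    return num_params * bits_per_param / 8 / 1e6

# Hypothetical 1M-parameter coordinate-based network:
for bits in (32, 8, 4, 1):
    print(f"{bits:>2} bits/param -> {model_size_mb(1_000_000, bits):.2f} MB")
# 32 bits -> 4.00 MB, 8 bits -> 1.00 MB, 4 bits -> 0.50 MB, 1 bit -> 0.12 MB
```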