2022
DOI: 10.48550/arxiv.2201.05989
Preprint

Instant Neural Graphics Primitives with a Multiresolution Hash Encoding

Thomas Müller,
Alex Evans,
Christoph Schied
et al.

Abstract (excerpt): …parallelism by implementing the whole system using fully-fused CUDA kernels with a focus on minimizing wasted bandwidth and compute operations. We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of 1920×1080.

CCS Concepts: • Computing methodologies → Massively parallel algorithms; Vector / streaming algorithms; Neural networks.
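The multiresolution hash encoding named in the title can be illustrated with a minimal sketch: each resolution level hashes the integer corners of the grid cell containing a query point into a small learned feature table, interpolates the corner features, and concatenates the per-level results. The sketch below is a simplified 2D NumPy version under stated assumptions; the hash primes follow the paper's spatial hash, but the table size, level count, and feature width here are illustrative, not the paper's defaults, and the real system trains the tables jointly with an MLP in CUDA.

```python
import numpy as np

# Spatial-hash primes as in the paper's XOR hash (first coordinate uses 1).
PRIMES = (1, 2654435761)

def hash_corner(ix, iy, table_size):
    """Hash an integer 2D grid corner into a feature-table index."""
    return ((ix * PRIMES[0]) ^ (iy * PRIMES[1])) % table_size

def encode(xy, tables, base_res=16, growth=2.0):
    """Encode a point xy in [0,1]^2 into concatenated per-level features."""
    feats = []
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)   # grid resolution at this level
        x, y = xy[0] * res, xy[1] * res
        x0, y0 = int(x), int(y)                 # lower-left corner of the cell
        tx, ty = x - x0, y - y0                 # fractional position in cell
        # Bilinearly interpolate the four hashed corner feature vectors.
        f = 0.0
        for dx, wx in ((0, 1 - tx), (1, tx)):
            for dy, wy in ((0, 1 - ty), (1, ty)):
                idx = hash_corner(x0 + dx, y0 + dy, len(table))
                f = f + wx * wy * table[idx]
        feats.append(f)
    return np.concatenate(feats)

# Illustrative setup: 4 levels, 2 features per entry, 2^14-entry tables.
rng = np.random.default_rng(0)
tables = [rng.normal(scale=1e-4, size=(2**14, 2)) for _ in range(4)]
print(encode(np.array([0.3, 0.7]), tables).shape)  # (8,) = levels x features
```

Because collisions are resolved implicitly by training rather than by probing, lookup is a single O(1) gather per corner, which is what makes the encoding amenable to the fully-fused kernels the abstract describes.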


Cited by 84 publications (181 citation statements)
References 20 publications
“…Several NeRF caching techniques [20,25,72] or a sparse voxel grid [36] could be used to enable real-time Block-NeRF rendering. Similarly, multiple concurrent works have demonstrated techniques to speed up training of NeRF style representations by multiple orders of magnitude [43,60,71].…”
Section: Limitations and Future Work
Confidence: 99%
“…Accelerating NeRF. There are many existing works that accelerate NeRF [Lindell et al 2020; Liu et al 2020; Müller et al 2022; Reiser et al 2021; Yu et al 2021a,b]. [Liu et al 2020] uses a sparse octree representation with a set of voxel-bounded implicit fields and achieves 10 times faster inference compared with the canonical NeRF.…”
Section: Related Work
Confidence: 99%
“…Recently, […] directly optimizes a sparse 3D grid without any neural networks, achieving more than 100 times faster training and also supporting real-time rendering. [Müller et al 2022] achieves near-instant training (around 5 s to 1 min) of neural graphics primitives with a multiresolution hash encoding. Though these works are very effective at speeding up NeRF, they only support static scenes.…”
Section: Related Work
Confidence: 99%