2021
DOI: 10.48550/arxiv.2103.13744
Preprint
KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs

Abstract: NeRF synthesizes novel views of a scene with unprecedented quality by fitting a neural radiance field to RGB images. However, NeRF requires querying a deep Multi-Layer Perceptron (MLP) millions of times, leading to slow rendering times, even on modern GPUs. In this paper, we demonstrate that significant speed-ups are possible by utilizing thousands of tiny MLPs instead of one single large MLP. In our setting, each individual MLP only needs to represent parts of the scene, thus smaller and faster-to-evaluate MLPs…
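The abstract's core idea, routing each 3D sample point to one of thousands of tiny MLPs that owns the point's grid cell, can be sketched as follows. This is a minimal NumPy illustration under assumed shapes and sizes (the class names, the 4x4x4 grid, and the 32-unit hidden layers are hypothetical choices for the sketch, not the authors' implementation):

```python
import numpy as np

class TinyMLP:
    """A small two-hidden-layer MLP (hypothetical sizes; KiloNeRF's networks are similarly tiny)."""
    def __init__(self, rng, in_dim=3, hidden=32, out_dim=4):
        self.W1 = rng.standard_normal((in_dim, hidden)) * 0.1
        self.b1 = np.zeros(hidden)
        self.W2 = rng.standard_normal((hidden, hidden)) * 0.1
        self.b2 = np.zeros(hidden)
        self.W3 = rng.standard_normal((hidden, out_dim)) * 0.1
        self.b3 = np.zeros(out_dim)

    def __call__(self, x):
        h = np.maximum(x @ self.W1 + self.b1, 0.0)   # ReLU layer 1
        h = np.maximum(h @ self.W2 + self.b2, 0.0)   # ReLU layer 2
        return h @ self.W3 + self.b3                  # RGB + density per point

class TinyMLPGrid:
    """Scene bounding box subdivided into res^3 cells, one tiny MLP per cell."""
    def __init__(self, res=4, lo=-1.0, hi=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.res, self.lo, self.hi = res, lo, hi
        self.mlps = [TinyMLP(rng) for _ in range(res ** 3)]

    def query(self, pts):
        # Map each point to its cell index, then evaluate only that cell's MLP.
        cell = ((pts - self.lo) / (self.hi - self.lo) * self.res).astype(int)
        cell = np.clip(cell, 0, self.res - 1)
        flat = cell[:, 0] * self.res ** 2 + cell[:, 1] * self.res + cell[:, 2]
        out = np.empty((len(pts), 4))
        for idx in np.unique(flat):        # batch all points falling in the same cell
            mask = flat == idx
            out[mask] = self.mlps[idx](pts[mask])
        return out

grid = TinyMLPGrid(res=4)                  # 64 tiny MLPs covering the scene
pts = np.random.default_rng(1).uniform(-1, 1, (1000, 3))
rgb_sigma = grid.query(pts)
print(rgb_sigma.shape)  # (1000, 4)
```

Because each query touches only one tiny network instead of one deep shared MLP, the per-sample cost drops sharply; the real system additionally fuses these evaluations in custom CUDA kernels.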

Cited by 32 publications (49 citation statements). References 48 publications.
“…At the low level, we plan to further optimize the GPU backend, and accelerate the CPU counterpart, potentially with cache level optimization and code generation [29], [55]. We also plan to apply ASH to sparse convolution [43], [56] and neural rendering [57], [58], where spatially varying parameterizations are exploited. ASH accelerates a variety of 3D perception workloads.…”
Section: Discussion
confidence: 99%
“…The field of INRs is progressing rapidly and we will likely be able to take advantage of this progress, including hybrid representations (Martel et al., 2021), better activation functions (Ramasinghe & Lucey, 2021), and reduced memory consumption (Huang et al., 2021). There has also been a plethora of work on improving NeRF (Barron et al., 2021; Reiser et al., 2021; Yu et al., 2021; Piala & Clark, 2021), which should be directly applicable to our framework. Further, recent works have shown promise for storing compressed datasets as functions (Dupont et al., 2021a; Chen et al., 2021; Strümpler et al., 2021; Zhang et al., 2021).…”
Section: Conclusion, Limitations and Future Work
confidence: 99%
“…Although, our model is naturally faster than NeRF (3×) because we skip shading in empty space. We believe future work combining the mechanisms introduced in current papers such as [42,61] with our point-based radiance representation would further benefit neural rendering technology.…”
Section: Limitations
confidence: 99%
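The empty-space skipping mentioned in the last citation statement (which KiloNeRF also relies on) can be sketched with a binary occupancy grid: samples along a ray are first tested against the grid, and only samples landing in occupied cells are forwarded to the expensive network evaluation. A toy NumPy sketch under assumed scene bounds, ray range, and grid resolution (all names here are hypothetical):

```python
import numpy as np

def march_ray(origin, direction, occupancy, lo=-1.0, hi=1.0, n_steps=128):
    """Sample along a ray, keeping only samples that fall in occupied cells."""
    res = occupancy.shape[0]
    t = np.linspace(0.0, 2.0 * np.sqrt(3), n_steps)       # assumed near/far range
    pts = origin + t[:, None] * direction                  # (n_steps, 3) sample points
    inside = np.all((pts >= lo) & (pts < hi), axis=1)      # discard samples outside the box
    cell = ((pts - lo) / (hi - lo) * res).astype(int)
    cell = np.clip(cell, 0, res - 1)
    occupied = inside & occupancy[cell[:, 0], cell[:, 1], cell[:, 2]]
    return pts[occupied]                                   # only these reach the network

res = 8
occ = np.zeros((res, res, res), dtype=bool)
occ[3:5, 3:5, 3:5] = True                                  # a small occupied blob
kept = march_ray(np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), occ)
print(f"{len(kept)} of 128 samples need network evaluation")
```

Since real scenes are mostly empty, this kind of culling removes the bulk of the per-ray network queries before any MLP is evaluated, which is where much of the reported speed-up comes from.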