2021
DOI: 10.48550/arxiv.2111.13112
Preprint

VaxNeRF: Revisiting the Classic for Voxel-Accelerated Neural Radiance Field

Abstract: Neural Radiance Field (NeRF) is a popular method for data-driven 3D reconstruction. Given its simplicity and high-quality rendering, many NeRF applications are being developed. However, NeRF's major limitation is its slow speed. Many attempts have been made to speed up NeRF training and inference, including intricate code-level optimization and caching, the use of sophisticated data structures, and amortization through multi-task and meta learning. In this work, we revisit the basic building blocks of NeRF through the l…
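VaxNeRF's acceleration idea is to skip ray samples that fall outside a voxelized visual hull of the object, so the network is only queried where the scene can be occupied. A minimal sketch of that sample-culling step, assuming a precomputed boolean occupancy grid (the `occupancy`, `grid_min`, and `voxel_size` names are illustrative, not taken from the paper's code):

```python
import numpy as np

def cull_samples(points, occupancy, grid_min, voxel_size):
    """Keep only ray samples that land inside occupied voxels.

    points    : (N, 3) sample positions along rays
    occupancy : (X, Y, Z) boolean visual-hull grid
    grid_min  : (3,) world coordinate of the grid's corner
    voxel_size: scalar edge length of one voxel
    """
    # Map each sample to its voxel index.
    idx = np.floor((points - grid_min) / voxel_size).astype(int)
    shape = np.array(occupancy.shape)
    in_bounds = np.all((idx >= 0) & (idx < shape), axis=1)
    # Samples outside the grid are discarded; in-bounds samples
    # survive only if their voxel is marked occupied.
    keep = np.zeros(len(points), dtype=bool)
    ib = idx[in_bounds]
    keep[in_bounds] = occupancy[ib[:, 0], ib[:, 1], ib[:, 2]]
    return points[keep], keep
```

Only the surviving points are forwarded to the (expensive) NeRF MLP; the boolean mask lets the renderer scatter the predicted densities back into place.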

Cited by 5 publications (5 citation statements)
References 66 publications
“…Although refiNeRF shows promising results for both static and dynamic scenes, it shares the same limitations as the original NeRF approach, such as slow optimization and rendering, the requirement of dense 3D sampling, and dependence on heuristic coarse-to-fine scheduling strategies. Through the application of recent advancements such as iNGP [24] and vaxNeRF [17], we attempt to bypass some of the aforementioned limitations and believe that this framework will accelerate the widespread adoption of NeRFs.…”
Section: Discussion (mentioning, confidence: 99%)
“…For example, Deng et al. 54 implemented one of the neural representations 35 using the JAX library 55 and achieved 20-30 times faster training compared to the official TensorFlow 56 implementation. Kondo et al. 57 sped up training by two to eight times by efficiently sampling coordinates only inside the bounding volume (i.e., the patient body in the case of CT images). Tancik et al. 58 reported that they could achieve a high-quality neural representation in 10-20 times fewer training iterations via meta-learned initialization 59,60 , which finds a good initial point for the network parameters by optimizing them on several different scenes or objects alternately.…”
Section: Discussion (mentioning, confidence: 99%)
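The bounding-volume trick mentioned above amounts to clipping each ray against an axis-aligned box before placing samples, so no network queries are spent in empty space. A minimal sketch of the standard slab-method ray/AABB intersection (function and argument names are illustrative; it assumes the ray direction has no zero components):

```python
import numpy as np

def ray_aabb(origin, direction, box_min, box_max):
    """Near/far distances where a ray enters and exits an
    axis-aligned bounding box. No hit if t_near > t_far."""
    inv = 1.0 / direction                 # assumes no zero components
    t0 = (box_min - origin) * inv         # per-axis slab entry
    t1 = (box_max - origin) * inv         # per-axis slab exit
    t_near = np.max(np.minimum(t0, t1))   # latest entry across axes
    t_far = np.min(np.maximum(t0, t1))    # earliest exit across axes
    return t_near, t_far
```

Samples are then drawn only in the interval [t_near, t_far] instead of along the full ray, which is where the reported two- to eight-fold speedup comes from.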
“…Furthermore, PlenOctrees [72] use a hierarchical octree structure of the density and the view-dependent radiance in terms of spherical harmonics (SH) to entirely avoid network evaluations. Improving the convergence of the training process has also been investigated by using additional data such as depth maps [11] or a visual hull computed from binary foreground masks [24] as additional guidance. Furthermore, meta-learning approaches allow for a more effective initialization compared to random weights [55].…”
Section: Acceleration of NeRF Training and Rendering (mentioning, confidence: 99%)
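Storing view-dependent radiance as spherical-harmonic coefficients, as the PlenOctrees excerpt describes, means color can be recovered for any view direction with a cheap dot product instead of a network evaluation. A minimal degree-1 sketch using the standard real-SH constants (the `sh_radiance` name and the (4, 3) coefficient layout are illustrative assumptions, not the library's API):

```python
import numpy as np

def sh_radiance(coeffs, d):
    """Evaluate degree-1 real spherical harmonics for view direction d.

    coeffs: (4, 3) SH coefficients per RGB channel, e.g. stored in
            an octree leaf; d: (3,) unit view direction.
    Returns the (3,) RGB radiance before any activation.
    """
    c0 = 0.28209479177387814            # Y_0^0 constant
    c1 = 0.4886025119029199             # |Y_1^m| constant
    # Basis values for (Y_0^0, Y_1^-1, Y_1^0, Y_1^1).
    basis = np.array([c0, -c1 * d[1], c1 * d[2], -c1 * d[0]])
    return basis @ coeffs
```

Higher degrees (PlenOctrees uses up to degree 4, i.e. 25 coefficients per channel) follow the same pattern with more basis terms; the lookup-plus-dot-product structure is what removes the MLP from the rendering loop.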