2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021
DOI: 10.1109/cvpr46437.2021.00782
Stereo Radiance Fields (SRF): Learning View Synthesis for Sparse Views of Novel Scenes

Cited by 169 publications (68 citation statements) · References 35 publications
“…Despite synthesizing high-fidelity novel views, these methods require a lengthy optimization process and cannot generalize to new scenes. 2) The second track attempts to learn a generalizable neural radiance field across multiple scenes [3,6,54,57,66]. Among these, pixelNeRF [66] is the most relevant method to ours, which…”
Section: Novel View Synthesis and Neural Radiance Field
confidence: 99%
“…learns scene priors conditioned on pixel-aligned features and can switch to new scenes flexibly. Although other methods [3,6,54,57] can also be applied to novel scenes through a single forward pass, they require multiple input views, while we focus on the more challenging single-view input setting.…”
Section: Novel View Synthesis and Neural Radiance Field
confidence: 99%
“…The idea of projecting information from source views into 3D and then using the neural radiance field framework to render a target view has also been used in learning-based approaches [3,4,6,33,35,43]. These approaches train, across multiple scenes, networks that take as input features from the source views aggregated at a given 3D point and output radiance and occupancy for that point.…”
Section: Related Work
confidence: 99%
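The excerpt above describes the core conditioning mechanism shared by these generalizable radiance-field methods: image features from several source views are sampled at a 3D point, aggregated, and mapped to radiance and density. A minimal numpy sketch of that per-point mapping follows — all names, dimensions, and the single-layer network are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def radiance_field(xyz, per_view_feats, w1, w2):
    """Aggregate source-view features at each 3D point, then map to (rgb, sigma).

    xyz:            (N, 3)  sampled 3D points
    per_view_feats: (V, N, F) features sampled from each of V source views at
                    the projection of each point (aggregation here is a mean;
                    real methods may use learned or variance-aware pooling)
    """
    agg = per_view_feats.mean(axis=0)                         # (N, F)
    h = np.maximum(np.concatenate([xyz, agg], axis=-1) @ w1, 0.0)  # ReLU layer
    out = h @ w2                                              # (N, 4)
    rgb = 1.0 / (1.0 + np.exp(-out[:, :3]))                   # sigmoid -> (0, 1)
    sigma = np.maximum(out[:, 3:], 0.0)                       # non-negative density
    return rgb, sigma

F, H = 16, 32
w1 = rng.standard_normal((3 + F, H)) * 0.1   # toy random weights
w2 = rng.standard_normal((H, 4)) * 0.1
pts = rng.random((128, 3))                   # 128 query points
feats = rng.random((4, 128, F))              # features from 4 source views
rgb, sigma = radiance_field(pts, feats, w1, w2)
print(rgb.shape, sigma.shape)
```

Because the network is conditioned on per-scene image features rather than memorizing one scene in its weights, it can be applied to a new scene in a single forward pass, which is the generalization property the citing papers highlight.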
“…Bemana et al. [5] work in static settings but predict not only the radiance field but also the lighting, given varying-illumination data. Chibane et al. [14] trade instant depth prediction and synthesis for the requirement of multiple images. Alternatively, volumetric representations [39,40] can also be utilized to capture dynamic scenes.…”
Section: Related Work
confidence: 99%