2022
DOI: 10.48550/arxiv.2207.02621
Preprint

VMRF: View Matching Neural Radiance Fields

Abstract: Neural Radiance Fields (NeRF) has demonstrated very impressive performance in novel view synthesis via implicitly modelling 3D representations from multi-view 2D images. However, most existing studies train NeRF models with either reasonable camera pose initialization or manually-crafted camera pose distributions which […]
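For background only (this is the standard NeRF formulation, not anything specific to VMRF): NeRF represents a scene as a function mapping a 3D position and viewing direction to density and colour, and renders a pixel by integrating along the camera ray r(t) = o + t d,

    C(r) = \int_{t_n}^{t_f} T(t)\,\sigma(r(t))\,c(r(t), d)\,dt, \qquad T(t) = \exp\!\Big(-\int_{t_n}^{t} \sigma(r(s))\,ds\Big),

so the implicit 3D representation is supervised purely by comparing rendered colours C(r) against the observed multi-view 2D images. Camera poses enter only through the ray origins o and directions d, which is why pose initialization matters for training.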

Cited by 1 publication (2 citation statements)
References 36 publications

“…But all these works typically require precisely known camera poses. There are only a few works (Jeong et al., 2021; Zhang et al., 2022; Lin et al., 2021) trying to deal with uncalibrated cameras. Furthermore, all these methods require a long optimization process, and hence, are unsuitable for real-time applications like visual SLAM.…”
Section: Related Work (mentioning)
confidence: 99%
“…Multi-resolution Volume Encoding. Directly representing the scene map with MLPs, which maps a 3D point to its occupancy and color, confronts a forgetting problem because the MLP is globally updated for any frame (Sucar et al., 2021). To address this, we equip the MLP with multi-resolution volumes {V_l} (l = 1, …, L), which are updated locally on seen regions of each frame (Sara Fridovich-Keil and Alex Yu et al., 2022; Müller et al., 2022). The input point is encoded by the feature F sampled from the volumes {V_l}, which could also explicitly store the geometric information.…”
Section: Implicit Map Representation (mentioning)
confidence: 99%
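
To make the multi-resolution volume encoding described in the excerpt above concrete, here is a minimal PyTorch-style sketch: each level l stores a dense feature volume V_l, a query point is encoded by trilinearly interpolating and concatenating the per-level features F, and a small MLP would then decode them to occupancy and colour. The module name MultiResVolumeEncoder, the resolutions, and the feature width are illustrative assumptions, not the citing paper's actual implementation.

    # Illustrative sketch only: assumed names, shapes, and resolutions,
    # not the citing paper's code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiResVolumeEncoder(nn.Module):
        def __init__(self, resolutions=(16, 32, 64), feat_dim=8):
            super().__init__()
            # One learnable dense feature volume V_l per resolution level.
            self.volumes = nn.ParameterList([
                nn.Parameter(0.01 * torch.randn(1, feat_dim, r, r, r))
                for r in resolutions
            ])

        def forward(self, points):
            # points: (N, 3), normalized to [-1, 1] inside the mapped volume.
            grid = points.view(1, 1, 1, -1, 3)  # (1, 1, 1, N, 3) grid for 5-D grid_sample
            feats = []
            for vol in self.volumes:
                # Trilinear interpolation of this level's features at the query points.
                f = F.grid_sample(vol, grid, mode='bilinear', align_corners=True)
                feats.append(f.view(vol.shape[1], -1).t())  # (N, feat_dim)
            # Concatenated per-level features F, shape (N, L * feat_dim).
            return torch.cat(feats, dim=-1)

    encoder = MultiResVolumeEncoder()
    pts = torch.rand(1024, 3) * 2 - 1   # random query points in [-1, 1]^3
    feat = encoder(pts)                 # (1024, 24) encoded features

Because each query only touches the voxel features around it, gradient updates stay local to the regions a frame actually observes, which is the property the excerpt contrasts with a single globally updated MLP.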