2022 IEEE International Conference on Multimedia and Expo (ICME) 2022
DOI: 10.1109/icme52920.2022.9859817

Omni-NeRF: Neural Radiance Field from 360° Image Captures

Abstract: This paper tackles the problem of novel view synthesis (NVS) from 360° images with imperfect camera poses or intrinsic parameters. We propose a novel end-to-end framework for training Neural Radiance Field (NeRF) models given only 360° RGB images and their rough poses, which we refer to as Omni-NeRF. We extend the pinhole camera model of NeRF to a more general camera model that better fits omni-directional fish-eye lenses. The approach jointly learns the scene geometry and optimizes the camera parameters wit…
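As a rough illustration of the kind of camera model the abstract describes, the sketch below generates per-pixel ray directions for an ideal equidistant fisheye lens in NumPy. The function name, the equidistant assumption, and the field-of-view handling are illustrative choices, not details taken from the paper.

```python
import numpy as np

def fisheye_rays(height, width, fov_deg=180.0):
    """Per-pixel ray directions for an ideal equidistant fisheye lens.

    Under the equidistant model, the distance r of a pixel from the image
    centre is proportional to the angle theta between the ray and the
    optical axis: r = f * theta.
    """
    # Pixel grid centred on the principal point (assumed at the image centre).
    v, u = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    x = u - cx
    y = v - cy

    # Radial distance in pixels and the focal length implied by the FOV.
    r = np.sqrt(x**2 + y**2)
    max_r = min(cx, cy)                       # radius of the fisheye image circle
    f = max_r / np.deg2rad(fov_deg / 2.0)     # r = f * theta at the image edge

    theta = r / f                             # angle from the optical axis
    phi = np.arctan2(y, x)                    # azimuth around the axis

    # Unit ray directions in the camera frame (z is the optical axis).
    dirs = np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)

    # Mask out pixels that fall outside the fisheye image circle.
    valid = theta <= np.deg2rad(fov_deg / 2.0)
    return dirs, valid
```

Given a camera-to-world rotation and translation, these directions can be rotated into world space and fed to a standard NeRF volume-rendering pipeline.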

Cited by 12 publications (5 citation statements). References 17 publications.
“…Multi-sphere [3] and multi-cylinder [15] images allocate blending weights and colors onto discrete surfaces, enabling the synthesis of free viewpoint panoramas. The Omni-NeRF [16] extends NeRF to be trained with raw fisheye captures. Some works [18,37] focus on photo-realistic renderings from a single ERP image with pre-acquired depth information.…”
Section: Omnidirectional Imaging (mentioning, confidence 99%)
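For context on the multi-sphere representation mentioned above, here is a minimal compositing sketch: each ray gathers a colour and a blending weight from every sphere and blends them front-to-back with the standard "over" operator. The layer geometry and resampling are omitted, and the function is hypothetical rather than taken from any of the cited works.

```python
import numpy as np

def composite_msi(colors, alphas):
    """Front-to-back alpha compositing of multi-sphere image layers.

    colors: (L, 3) RGB samples of one ray, one per sphere, ordered near-to-far.
    alphas: (L,)  blending weights stored on the same spheres.
    Returns the composited RGB value for the ray.
    """
    rgb = np.zeros(3)
    transmittance = 1.0
    for c, a in zip(colors, alphas):      # near-to-far "over" compositing
        rgb += transmittance * a * c
        transmittance *= (1.0 - a)
    return rgb
```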
“…We first want to find the optimal settings for combining the hash-encoding (HE) and frequency-encoding (FE) levels. To achieve this, we performed a parameter sweep across different frequency levels (2, 4, 8, 16) while keeping the levels of the hash grid fixed at the default value of 16. We conduct our experiment on the synthetic datasets of ODIs from the 2 Blender demos (“Classroom”, “Lone Monk”) [1].…”
Section: Hash-frequency Encoding (mentioning, confidence 99%)
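As a rough illustration of the frequency-encoding side of that sweep, the sketch below implements the standard NeRF positional (frequency) encoding with a configurable number of levels and loops over the values (2, 4, 8, 16) quoted above. The hash-grid encoding itself (fixed at 16 levels in the cited experiment) is omitted, and the shapes are illustrative.

```python
import numpy as np

def frequency_encoding(x, num_levels):
    """NeRF-style frequency (positional) encoding of coordinates x.

    x: (N, D) input coordinates, e.g. 3D positions.
    Returns (N, D * 2 * num_levels) features [sin(2^k pi x), cos(2^k pi x)].
    """
    feats = []
    for k in range(num_levels):
        freq = (2.0 ** k) * np.pi
        feats.append(np.sin(freq * x))
        feats.append(np.cos(freq * x))
    return np.concatenate(feats, axis=-1)

# Hypothetical sweep over the frequency levels quoted in the citing work,
# with the hash-grid branch left out of this sketch.
for levels in (2, 4, 8, 16):
    enc = frequency_encoding(np.random.rand(1024, 3), levels)
    print(levels, enc.shape)
```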
“…OmniNeRF [44] synthesizes novel fish-eye projection images, using spherical sampling to improve the quality of results. 360Roam [45] is a scene-level NeRF system that can synthesize images of large-scale indoor scenes in real-time and support VR roaming.…”
Section: B. 360 Panorama View Synthesis (mentioning, confidence 99%)
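The cited text does not spell out what its spherical sampling looks like; one common reading is drawing training ray directions uniformly on the unit sphere, as in this assumed sketch.

```python
import numpy as np

def sample_sphere_directions(n, rng=None):
    """Draw n ray directions uniformly distributed on the unit sphere.

    Sampling uniformly in cos(theta) and in azimuth avoids the pole
    clustering that naive (theta, phi) sampling would introduce.
    """
    rng = np.random.default_rng() if rng is None else rng
    cos_theta = rng.uniform(-1.0, 1.0, n)          # uniform in cos(theta)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_theta = np.sqrt(1.0 - cos_theta**2)
    return np.stack([sin_theta * np.cos(phi),
                     sin_theta * np.sin(phi),
                     cos_theta], axis=-1)
```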
“…This function maps a 5D input (3D spatial coordinates and 2D viewing directions) into a view-dependent RGB triplet and one value corresponding to the volume density. NeRF's success has led to several extension works targeting its limitations, such as generalization [2], relighting [3], and different imaging input types [4], [5].…”
Section: Introduction (mentioning, confidence 99%)
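As a toy illustration of that 5D mapping, the sketch below wires a tiny randomly initialised MLP from position and viewing direction to an RGB triplet and a non-negative density. Layer sizes, activations, and the absence of positional encoding are simplifications, not the architecture of any cited method.

```python
import numpy as np

def radiance_field(xyz, view_dir, params):
    """Toy stand-in for the NeRF mapping F: (x, y, z, theta, phi) -> (r, g, b, sigma).

    xyz:      (N, 3) sample positions.
    view_dir: (N, 3) unit viewing directions (the 2D direction lifted to 3D).
    params:   dict of weight matrices for a small MLP (random here).
    """
    h = np.tanh(xyz @ params["w1"])                          # position branch
    sigma = np.log1p(np.exp(h @ params["w_sigma"]))          # softplus density >= 0
    h_dir = np.tanh(np.concatenate([h, view_dir], -1) @ params["w2"])
    rgb = 1.0 / (1.0 + np.exp(-(h_dir @ params["w_rgb"])))   # sigmoid colour in [0, 1]
    return rgb, sigma

# Example with random weights (hypothetical sizes, illustration only).
rng = np.random.default_rng(0)
params = {"w1": rng.normal(size=(3, 64)),
          "w_sigma": rng.normal(size=(64, 1)),
          "w2": rng.normal(size=(67, 64)),
          "w_rgb": rng.normal(size=(64, 3))}
dirs = rng.normal(size=(8, 3))
dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)         # unit viewing directions
rgb, sigma = radiance_field(rng.normal(size=(8, 3)), dirs, params)
```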