2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00629

GNeRF: GAN-based Neural Radiance Field without Posed Camera

Cited by 146 publications (74 citation statements)
References 42 publications
“…We also tested BARF's [16] coarse-to-fine adjustment method to improve our poses but found that the results were generally inferior to those provided by Pix4DMapper. SCNeRF [12], GNeRF [20], and PixSFM [17] are all recent alternatives that merit further exploration.…”
Section: B. Limitations
confidence: 99%
“…These methods can reconstruct high-quality 3D shapes and perform photo-realistic view synthesis, but they have several strong assumptions on the input data, including dense camera views, precise camera parameters, and constant lighting effects. More recently, some methods [3,13,25,26,30,35] have attempted to reduce the constraints on the input data. By appending an appearance embedding to each input image, [25] can recover 3D scenes from multi-view images with different lighting effects.…”
Section: Related Work
confidence: 99%
“…By appending an appearance embedding to each input image, [25] can recover 3D scenes from multi-view images with different lighting effects. [13,26] reconstruct neural radiance fields from very sparse views by applying a discriminator to supervise the synthesized images on novel views. Different from these methods requiring multi-view images, our approach can synthesize high-resolution images by training networks only on unstructured single-view image collections.…”
Section: Related Work
confidence: 99%
“…Neural Representations: In 3D vision, coordinate-based neural representations [7,33,34,44] have become a popular representation for various tasks such as 3D reconstruction [1,7,13,14,33,39,42,44,45,48,51,55,57], 3D-aware generative modelling [5,9,15,16,32,37,38,41,49,64], and novel-view synthesis [2,3,12,22,24,27,31,36,40,43,52,60,61]. In contrast to traditional representations like point clouds, meshes, or voxels, this paradigm represents 3D geometry and color information in the weights of a neural network, leading to a compact representation.…”
Section: Related Work
confidence: 99%
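The coordinate-based paradigm described in the excerpt above can be sketched in a few lines: a small MLP maps a 3D point to color and density, so the entire scene lives in the network's weight matrices rather than in an explicit point cloud, mesh, or voxel grid. The following is a minimal, untrained NumPy illustration under assumed names (`CoordinateField`, `positional_encoding`); it shows the general idea, not any cited method's actual implementation.

```python
import numpy as np

def positional_encoding(x, num_freqs=6):
    # NeRF-style encoding: append sin/cos features at increasing
    # frequencies so an MLP can represent high-frequency detail.
    feats = [x]
    for k in range(num_freqs):
        feats.append(np.sin((2.0 ** k) * x))
        feats.append(np.cos((2.0 ** k) * x))
    return np.concatenate(feats, axis=-1)

class CoordinateField:
    """Toy coordinate-based scene representation: 3D point -> (RGB, density).

    All scene content would be stored in W1/W2 after training -- a compact
    alternative to point clouds, meshes, or voxel grids.
    """
    def __init__(self, num_freqs=6, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 3 * (1 + 2 * num_freqs)  # raw xyz + sin/cos features
        self.num_freqs = num_freqs
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 4))  # 3 color channels + 1 density
        self.b2 = np.zeros(4)

    def __call__(self, xyz):
        h = np.maximum(positional_encoding(xyz, self.num_freqs) @ self.W1 + self.b1, 0.0)
        out = h @ self.W2 + self.b2
        rgb = 1.0 / (1.0 + np.exp(-out[..., :3]))  # sigmoid -> colors in [0, 1]
        sigma = np.maximum(out[..., 3:], 0.0)      # ReLU -> non-negative density
        return rgb, sigma
```

A renderer would query such a field at sample points along camera rays and composite the returned colors by density; the GAN-based variants cited above ([13, 26] and GNeRF itself) additionally pass the rendered images through a discriminator instead of relying on known camera poses or dense views.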