2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.00161
LOLNeRF: Learn from One Look

Cited by 74 publications (20 citation statements)
References 46 publications
“…An example of a synthetic data rendering process is to center a scan (e.g., objects from ScanNet [66], ShapeNet [67], or DeepVoxels [14]) at the origin, scale it to lie within the unit cube, and render images at sampled viewpoints. The training set can be obtained by …” [Timeline-figure residue omitted: a 2019–2023 chronology of 3D-aware generative models, from S²-GAN [12], PrGAN [13], and DeepVoxels [14] through GRAF [28], pi-GAN [2], StyleNeRF [1], LOLNeRF [46], EG3D [48], GRAM [49], and StyleSDF.]
Section: Multiple-view Image Collections
confidence: 99%
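The rendering pipeline quoted above (center a scan at the origin, scale it to lie within the unit cube, render from sampled viewpoints) can be sketched as follows. This is a minimal illustration, not code from any cited paper; the function names and the uniform spherical viewpoint sampling are assumptions.

```python
import numpy as np

def normalize_to_unit_cube(points):
    """Center an (N, 3) scan at the origin and scale it into the unit cube.

    Illustrative sketch: centers on the bounding-box midpoint, then scales
    so the largest half-extent becomes 0.5, i.e. points fit in [-0.5, 0.5]^3.
    """
    center = (points.max(axis=0) + points.min(axis=0)) / 2.0
    points = points - center
    scale = np.abs(points).max()      # largest half-extent after centering
    return points / (2.0 * scale)

def sample_viewpoints(n, radius=2.0, seed=0):
    """Sample n camera positions uniformly on a sphere around the object.

    Normalized Gaussian samples give a uniform direction; each camera sits
    at a fixed distance `radius` looking toward the origin.
    """
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return v * radius
```

Rendering each viewpoint with any renderer (rasterizer or volume renderer) then yields the posed multi-view training set the quote describes.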
“…[Flattened-table residue omitted: dataset statistics (category, image count, resolution, shape type) for a human-face dataset (30k images, 1024 × 1024) [46], MetFaces [83] (NeurIPS 2020, 1,336 art faces, 1024 × 1024) [1], M-Plants [60] (NeurIPS 2022, 141,824 images, 256 × 256), and M-Food [60] (NeurIPS 2022, 25,472 images, 256 × 256).] Methods built on top of a pretrained StyleGAN can be further categorized into three groups based on how the 3D control capability is introduced: 1) discovering 3D control latent directions, 2) adopting explicit control over the 3D parameters, and 3) introducing 3D-aware components into 2D GANs. Fig.…”
Section: 3D Control of 2D Generative Models
confidence: 99%
“…StyleNeRF and CIPS-3D [16,72] combine a shallow NeRF network that provides low-resolution radiance fields with a 2D rendering network that produces high-resolution images with fine details. LOLNeRF [39] learns 3D objects by optimizing foreground and background NeRFs together with a learnable per-image table of latent codes. Zhao et al [71] develop a generative multi-plane image (GMPI) representation to ensure view consistency.…”
Section: Related Work
confidence: 99%
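The "learnable per-image table of latent codes" mentioned in the quote is the auto-decoder idea: instead of an encoder, every training image owns a latent row that is optimized jointly with the network weights. A minimal sketch, assuming illustrative names and sizes (none of this is LOLNeRF's actual implementation):

```python
import numpy as np

class LatentTable:
    """Per-image latent codes, one learnable row per training image.

    Sketch of the auto-decoding setup: the table is optimized jointly
    with the (foreground/background) NeRF weights; dimensions are
    illustrative assumptions.
    """

    def __init__(self, num_images, latent_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Small random init; row i conditions the NeRF when rendering image i.
        self.codes = rng.normal(scale=0.01, size=(num_images, latent_dim))

    def lookup(self, image_ids):
        """Fetch the latent codes for a batch of image indices."""
        return self.codes[image_ids]

    def sgd_step(self, image_ids, grads, lr=1e-3):
        """Apply a gradient step to only the rows of the sampled images."""
        self.codes[image_ids] -= lr * grads
```

At training time, each batch of rays from image i is rendered conditioned on `lookup(i)`, and the loss gradient flows into both the network weights and that image's row of the table.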