2022
DOI: 10.1007/978-3-031-19781-9_42
Sem2NeRF: Converting Single-View Semantic Masks to Neural Radiance Fields

Cited by 22 publications (4 citation statements)
References 50 publications
“…A pioneering effort by PixelNeRF first integrated pixel features of source images into vanilla NeRF, which uses only position information. Additionally, IBRNet (Wang et al. 2021), SRF (Chibane et al. 2021) and MatchNeRF (Chen et al. 2023b) contributed to scene reconstruction through the feature alignment of projected points from diverse perspectives. Besides, researchers have also explored novel forms of explicit 3D representation built from sparse images (Fang et al. 2023), such as voxel meshes (Maturana and Scherer 2015; Sun, Sun, and Chen 2022; Huang et al. 2019; Deng et al. 2021), multiplane images (MPI) (Li et al. 2021; Fontaine et al. 2022), or layered depth images (LDI) (Tulsiani, Tucker, and Snavely 2018; Shih et al. 2020).…”
Section: Related Work
confidence: 99%
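The distinction drawn in the excerpt above, between vanilla NeRF conditioning only on 3D position and PixelNeRF-style methods additionally conditioning on image features sampled at each point's projection, can be made concrete with a small sketch. The snippet below is purely illustrative: the linear "networks", the pinhole projection, and the random feature map are toy placeholders, not PixelNeRF's actual architecture or code.

```python
# Minimal sketch contrasting a vanilla NeRF-style query (position only) with a
# pixel-conditioned query that also looks up a source-image feature at the
# point's projection. All weights and the camera model are toy placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Toy "MLPs": single random linear layers standing in for the real networks.
W_vanilla = rng.standard_normal((4, 3))         # (density + RGB) from xyz only
W_pixel   = rng.standard_normal((4, 3 + 8))     # (density + RGB) from xyz + feature

# Toy source-image feature map (H x W x C) and a simple pinhole projection.
feature_map = rng.standard_normal((32, 32, 8))
focal, cx, cy = 32.0, 16.0, 16.0

def project(xyz):
    """Project a 3D point (camera coordinates, z > 0) to pixel indices."""
    u = int(np.clip(focal * xyz[0] / xyz[2] + cx, 0, 31))
    v = int(np.clip(focal * xyz[1] / xyz[2] + cy, 0, 31))
    return v, u

def query_vanilla(xyz):
    """Vanilla NeRF-style query: position in, (density, rgb) out."""
    out = W_vanilla @ xyz
    return out[0], out[1:]

def query_pixel_conditioned(xyz):
    """Pixel-conditioned query: concatenate the projected image feature."""
    v, u = project(xyz)
    feat = feature_map[v, u]
    out = W_pixel @ np.concatenate([xyz, feat])
    return out[0], out[1:]

point = np.array([0.1, -0.2, 2.0])
print("vanilla:", query_vanilla(point))
print("pixel-conditioned:", query_pixel_conditioned(point))
```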
“…Method | Venue | Data | Condition | Supervision | Representation
HoloGAN [16] | ICCV 2019 | face, cat, car, LSUN | camera pose | unsupervised | deep voxel
Liao et al. [42] | CVPR 2020 | multiple object data | unconditional | unsupervised | 3D primitives
BlockGAN [20] | NeurIPS 2020 | multiple object data | camera pose | unsupervised | deep voxel
GRAF [28] | NeurIPS 2020 | rendered chair, face, cat, bird | camera matrix, pose | unsupervised | NeRF
pi-GAN [2] | CVPR 2021 | face, car, CARLA | camera position | unsupervised | NeRF
GIRAFFE [32] | CVPR 2021 | chair, cat, face, car, church | camera pose | unsupervised | compositional NeRF
GOF [45] | NeurIPS 2021 | cat, car, face | 3D location, camera pose | unsupervised | generative occupancy fields
ShadeGAN [43] | NeurIPS 2021 | cat, face | 3D location, camera pose | unsupervised | light field
CAMPARI [39] | 3DV 2021 | cat, car, face, chair | camera pose | unsupervised | decomposed NeRF
StyleNeRF [1] | ICLR 2022 | face, cat, car | camera pose | unsupervised | NeRF
GRAM [49] | CVPR 2022 | cat, face, CARLA | camera pose | unsupervised | —
EG3D [48] | CVPR 2022 | cat, face | camera parameters | unsupervised | tri-plane 3D representation
VolumeGAN [51] | CVPR 2022 | cat, car, face, bedroom, CARLA | camera pose | unsupervised | —
StyleSDF [50] | CVPR 2022 | cat, face | camera pose | unsupervised | SDF
Pix2NeRF [71] | CVPR 2022 | face, CARLA, rendered image | image | unsupervised | —
Sem2NeRF [57] | ECCV 2022 | cat, face | semantic mask, camera pose | semantic mask and image | —
SURF-GAN [63] | ECCV 2022 | face | camera pose | unsupervised | —
EpiGRAF [60] | NeurIPS 2022 | cat, face, variable-shape | camera pose | unsupervised | tri-plane 3D representation
IDE-3D [61] | TOG 2022 | face | camera pose | semantic mask and image | tri-plane 3D representation
…control over the 3D parameters θ gives the edited result x. Here, θ could be human-interpretable attribute descriptions or a set of parameters from 3D models.…”
Section: 3D Generative Models (From Single-View Imagery)
confidence: 99%
“…We especially emphasize their efforts towards: 1) learning efficient and expressive geometry and appearance representations (§ 5.1.1); 2) developing accelerated and view-consistent rendering algorithms (§ 5.1.2); and 3) real-time and user-interactive editing (§ 5.1.3). We then present conditional 3D-aware generative models [57], [61], [71], [135], [136], [137] in § 5.2. In 2D generative models, "unconditional" refers to methods whose only input is a latent code sampled from a prior distribution.…”
Section: 3D-aware Generative Models
confidence: 99%
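To make the conditional versus unconditional distinction in the excerpt above concrete, here is a minimal sketch of the two input conventions: an unconditional 3D-aware generator draws its latent code from a prior distribution, while a conditional one (for example, mask-driven synthesis in the spirit of Sem2NeRF or IDE-3D) derives it from a user-supplied input such as a semantic mask. The encoder, generator, and camera-pose sampler below are toy placeholders introduced only for illustration, not any paper's actual architecture.

```python
# Minimal sketch of unconditional vs. conditional inputs to a 3D-aware generator.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 16

# Toy fixed weights standing in for a trained generator and mask encoder.
W_gen = rng.standard_normal((4 * 4 * 3, LATENT_DIM + 2))
W_enc = rng.standard_normal((LATENT_DIM, 8 * 8))

def sample_camera_pose():
    """Toy camera pose: yaw and pitch sampled uniformly from small ranges."""
    return rng.uniform(low=[-0.5, -0.3], high=[0.5, 0.3])

def generate(z, pose):
    """Stand-in generator: maps (latent code, camera pose) to a tiny 'image'."""
    return (W_gen @ np.concatenate([z, pose])).reshape(4, 4, 3)

def encode_mask(mask):
    """Stand-in encoder: flattens an 8x8 semantic mask into a latent code."""
    return W_enc @ mask.reshape(-1)

# Unconditional: the latent code is sampled from a prior distribution.
z_uncond = rng.standard_normal(LATENT_DIM)
img_uncond = generate(z_uncond, sample_camera_pose())

# Conditional: the latent code is derived from a user-provided semantic mask.
semantic_mask = rng.integers(0, 5, size=(8, 8)).astype(np.float64)
z_cond = encode_mask(semantic_mask)
img_cond = generate(z_cond, sample_camera_pose())

print(img_uncond.shape, img_cond.shape)  # (4, 4, 3) (4, 4, 3)
```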