2020
DOI: 10.1007/978-3-030-58610-2_35

Semantic View Synthesis


Cited by 31 publications (17 citation statements)
References 51 publications
“…It shows that users preferred our results more than the baseline. e) Evaluating on other baselines: To show the generalization of our approach, we also applied the same procedure on pix2pixHD [12] and recently presented ASAPNet [51]. The results are in Table I.…”
Section: Methods (mentioning)
confidence: 99%
“…The data distribution is either modeled explicitly (e.g., variational autoencoder [17]) or implicitly (e.g., generative adversarial networks [5]). On the basis of unconditional generative models, conditional generative models target synthesizing images according to additional context such as image [4,19,25,33,45], segmentation mask [11,26,37,47], and text. The text conditions are often expressed in two formats: natural language sentences [40,42] or scene graphs [14].…”
Section: Related Work (mentioning)
confidence: 99%
“…Existing single-image view synthesis methods model the scene with point cloud [41,58], multi-plane image [56,22], or layered depth image [48,27]. Our method focuses on headshot portraits and uses an implicit function as the neural representation.…”
Section: Related Work (mentioning)
confidence: 99%