2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00282
Deep CG2Real: Synthetic-to-Real Translation via Image Disentanglement

Abstract: We present a method to improve the visual realism of low-quality, synthetic images, e.g. OpenGL renderings. Training an unpaired synthetic-to-real translation network in image space is severely under-constrained and produces visible artifacts. Instead, we propose a semi-supervised approach that operates on the disentangled shading and albedo layers of the image. Our two-stage pipeline first learns to predict accurate shading in a supervised fashion using physically-based renderings as targets, and further incr…
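The abstract rests on the standard intrinsic-image model, where an image factors into an albedo (reflectance) layer and a shading layer multiplied per pixel. A minimal sketch of that decomposition and recomposition, using NumPy with illustrative function names (not the authors' actual code):

```python
import numpy as np

# Intrinsic image model assumed in the paper: image = albedo * shading,
# element-wise per pixel. Function names here are illustrative only.

def decompose(image, albedo, eps=1e-6):
    """Recover the shading layer given an image and its albedo layer."""
    return image / (albedo + eps)

def recompose(albedo, shading):
    """Re-render the image from its disentangled layers."""
    return albedo * shading

# Toy example: uniform gray albedo under a left-to-right shading gradient.
albedo = np.full((4, 4, 3), 0.8)
shading = np.linspace(0.2, 1.0, 4).reshape(1, 4, 1) * np.ones((4, 4, 3))
image = recompose(albedo, shading)

# Translating the shading layer alone (e.g. with a network supervised by
# physically-based renderings, as the paper does) and then recomposing
# constrains the problem more than editing raw RGB pixels directly.
recovered = decompose(image, albedo)
```

The point of operating on the layers is that realism errors in synthetic renderings are largely shading errors; fixing them in the shading layer leaves the albedo, and hence scene content, untouched.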


Cited by 38 publications (26 citation statements) · References 31 publications
“…The image classification is used for the DI-HV. CG Real dataset is proposed by [3], which translates between synthetic indoor images and real indoor scenes. Semantic scene parsing is used as the DI-HV, where we generate the pseudo semantic segmentation mask by HRNet [49].…”
Section: Methods
confidence: 99%
“…Early approaches used simple hand‐modelled scenes, populated with either single or few objects, with accurate light transport simulation enabled by photon mapping [BSvdW*13]. Later methods built their scene content using 3D models and scene databases, along with rendering randomization, measured materials and environment maps for global illumination, while employing various flavours of path‐tracing and tone mapping algorithms [RRF*16, LS18, BLG18, BSP*19] (Figure 15a). In the same spirit, Baslamisli et al .…”
Section: Image Synthesis Methods Overview
confidence: 99%
“…Camera design [33], [154], [153]; Noise modeling [145], [3], [28]; Intrinsic decomposition [23], [196], [148], [20], [21], [266], [27], [225]. Table 2.1: Computer vision applications that benefit from synthetic training data, along with associated image synthesis approaches (superscripts denote cross-application approach).…”
Section: Computational Photography and Image Formation
confidence: 99%