2018
DOI: 10.48550/arxiv.1806.06575
Preprint
RenderNet: A deep convolutional network for differentiable rendering from 3D shapes

Abstract: Traditional computer graphics rendering pipelines are designed for procedurally generating 2D images from 3D shapes with high performance. The non-differentiability due to discrete operations (such as visibility computation) makes it hard to explicitly correlate rendering parameters and the resulting image, posing a significant challenge for inverse rendering tasks. Recent work on differentiable rendering achieves differentiability either by designing surrogate gradients for non-differentiable operations or via…

Cited by 21 publications (14 citation statements)
References 0 publications
“…The specific combination of volume rendering with machine learning has been addressed in recent literature: [32] introduced RenderNet, a deep convnet that performs differentiable rendering of voxelized 3D shapes. Their proposed network consists of a 3D and a 2D convolutional part connected by a novel projection unit which combines the features along the viewing ray with an MLP.…”
Section: Differentiable Rendering
confidence: 99%
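The projection unit described in the excerpt above — combining the features along each viewing ray with an MLP — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name, weight shapes, and two-layer MLP are illustrative assumptions.

```python
import numpy as np

def projection_unit(vol_feats, w1, b1, w2, b2):
    """Collapse a view-space 3D feature volume to a 2D feature map.

    vol_feats: (D, H, W, C) array whose depth axis D runs along the
    viewing rays. The features along each ray are concatenated and fed
    through a small MLP, which can learn a soft form of visibility
    reasoning instead of a hard, non-differentiable depth test.
    """
    D, H, W, C = vol_feats.shape
    # Gather all samples along each ray into one vector per pixel.
    rays = vol_feats.transpose(1, 2, 0, 3).reshape(H, W, D * C)
    hidden = np.maximum(rays @ w1 + b1, 0.0)   # ReLU hidden layer
    return hidden @ w2 + b2                    # (H, W, C_out) feature map
```

Because the per-ray reduction is a learned, fully differentiable function rather than a discrete visibility computation, gradients flow from the 2D output back to every voxel on the ray.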
“…6, are progressing in complexity by replacing more of the functions with deep neural networks. For comparison, we also include RenderNet [32] as a baseline in our experiments as, among previously published works, this architecture is most closely related to our concepts. RenderNet resamples the input volume to view space using the perspective camera transform, such that the X-axis of the resampled volume corresponds to the ray directions and the YZ-axes correspond to the image coordinates.…”
Section: Deep Direct Volume Rendering
confidence: 99%
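The view-space resampling step this excerpt describes — warping the voxel grid with the perspective camera transform so that one axis of the resampled volume runs along the rays and the other two index image pixels — can be sketched as follows. This is a simplified nearest-neighbour version under assumed conventions (volume spanning [-1, 1]^3, camera looking down -Z); the function name and parameters are illustrative, not from the cited works.

```python
import numpy as np

def resample_to_view_space(volume, cam_to_world, fov, n_depth, height, width,
                           near=1.0, far=3.0):
    """Warp a voxel grid into view space: output axis 0 is the ray (depth)
    axis, axes 1-2 are image rows/columns. Nearest-neighbour sampling."""
    # Per-pixel ray directions in camera space (camera looks down -Z).
    ys, xs = np.meshgrid(np.linspace(-1, 1, height),
                         np.linspace(-1, 1, width), indexing="ij")
    tan_half = np.tan(fov / 2)
    dirs = np.stack([xs * tan_half, ys * tan_half, -np.ones_like(xs)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # March n_depth samples along each ray, then move points to world space.
    ts = np.linspace(near, far, n_depth)
    pts = dirs[None] * ts[:, None, None, None]            # (D, H, W, 3)
    pts = pts @ cam_to_world[:3, :3].T + cam_to_world[:3, 3]
    # Map world coords in [-1, 1]^3 to voxel indices and sample.
    res = np.array(volume.shape)
    idx = np.clip(((pts + 1) / 2 * (res - 1)).round().astype(int), 0, res - 1)
    return volume[idx[..., 0], idx[..., 1], idx[..., 2]]  # (D, H, W)
```

In practice a differentiable variant would use trilinear interpolation instead of nearest-neighbour rounding, so that gradients with respect to the camera pose and voxel contents are well defined.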
“…But these methods operate on the 2D space of pixels, ignoring the 3D structure of our natural scene. 3D-aware image synthesis correlates 3D models with 2D images, enabling explicit modification of the 3D model [33,13,34,39,35,5,4]. Earlier 3D-aware image synthesis methods like RenderNet [33] introduce rendering convolutional networks with a projection unit that can render 2D images from 3D shapes.…”
Section: Related Work
confidence: 99%
“…Several methods operate on voxel grids [7,12,15,20]. Paschalidou et al [15] and Tulsiani et al [20] propose a probabilistic ray potential formulation.…”
Section: Related Work
confidence: 99%