2018
DOI: 10.1145/3272127.3275109
Differentiable Monte Carlo ray tracing through edge sampling

Abstract (excerpt): …times running from seconds to minutes depending on scene complexity and desired precision. We interface our differentiable ray tracer with the deep learning library PyTorch and show prototype applications in inverse rendering and the generation of adversarial examples for neural networks.
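
The inverse-rendering application mentioned in the abstract amounts to optimizing scene parameters by gradient descent through the renderer. The following is a minimal sketch of that loop, assuming only that a differentiable render function is exposed to PyTorch autograd, as the paper describes; the toy stand-in renderer and parameter names are illustrative, not the paper's actual API.

    import torch

    def render(params: torch.Tensor) -> torch.Tensor:
        """Toy stand-in for the differentiable ray tracer: maps two 'scene
        parameters' (brightness, tint) to an image so the loop below runs
        end to end. The paper's renderer fills this role with a full
        Monte Carlo render whose gradients include visibility terms."""
        base = torch.linspace(0.0, 1.0, 64).view(64, 1, 1).expand(64, 64, 3)
        return params[0] * base + params[1]

    target = render(torch.tensor([0.7, 0.1]))      # synthetic reference image
    params = torch.zeros(2, requires_grad=True)    # unknown scene parameters
    optimizer = torch.optim.Adam([params], lr=5e-2)

    for step in range(300):
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(render(params), target)
        loss.backward()                            # gradients w.r.t. scene parameters
        optimizer.step()

    print(params.detach())                         # approaches [0.7, 0.1]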

Cited by 354 publications (325 citation statements)
References 49 publications
“…The differentiable projection function f(·) could be implemented via existing differentiable renderers [15]; however, we experimentally found them to be unstable when the parameters are far from their target, in addition to requiring significant computation for each scene. Instead, we implement f(·) by projecting each light source on the sphere onto a spherical gaussian using the mapping…”
Section: Training Step 1: Radius, Color and Position (mentioning)
confidence: 99%
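
The mapping itself is truncated in the excerpt, but the spherical-gaussian form it refers to can be sketched. The code below is an assumption: it evaluates a generic spherical gaussian G(v) = μ · exp(λ(v·ξ − 1)) with lobe axis ξ, sharpness λ, and amplitude μ, chosen because that is the standard parameterization; it is not the cited paper's exact mapping.

    import torch

    def spherical_gaussian(v, axis, sharpness, amplitude):
        """Evaluate G(v) = amplitude * exp(sharpness * (dot(v, axis) - 1)).
        v and axis are unit direction vectors of shape (..., 3). The lobe
        peaks at v == axis and falls off smoothly and differentiably, which
        is what makes it attractive as a light-source proxy in training."""
        return amplitude * torch.exp(sharpness * ((v * axis).sum(dim=-1) - 1.0))

    # Illustrative parameters: a lobe pointing along +z.
    axis = torch.tensor([0.0, 0.0, 1.0])
    v = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)
    print(spherical_gaussian(v, axis, sharpness=10.0, amplitude=2.0))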
“…'C*N(kS)' denotes convolution layers with N S × S filters with stride 2, followed by Batch Normalization and ReLU. BU(18, 36) upsamples the response to produce an 18 × 36 × 3 resolution environment map. Each 'ResBLK' contains Conv256(k3) - BN - ReLU - Conv256(k3) - BN, where 'ConvN(kS)' denotes convolution layers with N S × S filters of stride 1 and 'BN' denotes Batch Normalization.…”
Section: Environment Map Estimator (mentioning)
confidence: 99%
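
The quoted notation maps directly onto standard layers. The sketch below is one reading of the described 'ResBLK' (Conv256(k3) - BN - ReLU - Conv256(k3) - BN with a skip connection); the padding and the exact placement of the residual addition are assumptions, since the excerpt does not specify them.

    import torch
    import torch.nn as nn

    class ResBlk(nn.Module):
        """Residual block matching the quoted recipe:
        Conv256(k3) - BN - ReLU - Conv256(k3) - BN, plus a skip connection.
        padding=1 preserves spatial resolution (an assumption)."""

        def __init__(self, channels: int = 256):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1),
                nn.BatchNorm2d(channels),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return torch.relu(x + self.body(x))

    # Quick shape check on a dummy feature map.
    print(ResBlk()(torch.randn(1, 256, 15, 20)).shape)  # (1, 256, 15, 20)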
“…It first concatenates the responses of 'Enc', 'Normal ResBLKs' and 'Albedo ResBLKs' to produce a blob of spatial resolution 768 × 60 × 80. It is further processed by the following module: C256(k1) - C*256(k3) - C*128(k3) - C*3(k3) - BU(18, 36), where 'CN(kS)' denotes convolution layers with N S × S filters with stride 1, followed by Batch Normalization and ReLU, and 'C*N(kS)' denotes convolution layers with N S × S filters with stride 2, followed by Batch Normalization and ReLU.…”
Section: Training On Synthetic Data (mentioning)
confidence: 99%
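
Read literally, that module is a 1 × 1 convolution followed by three stride-2 convolutions and an upsampling step to an 18 × 36 map. The sketch below is one plausible PyTorch reading of the chain; the 768 × 60 × 80 input comes from the excerpt, while the padding choices and bilinear upsampling mode are assumptions.

    import torch
    import torch.nn as nn

    def conv_bn_relu(cin, cout, k, stride):
        """CN(kS) / C*N(kS) from the excerpt: conv + BatchNorm + ReLU."""
        return nn.Sequential(
            nn.Conv2d(cin, cout, kernel_size=k, stride=stride, padding=k // 2),
            nn.BatchNorm2d(cout),
            nn.ReLU(inplace=True),
        )

    # C256(k1) - C*256(k3) - C*128(k3) - C*3(k3) - BU(18, 36)
    decoder = nn.Sequential(
        conv_bn_relu(768, 256, k=1, stride=1),   # C256(k1)
        conv_bn_relu(256, 256, k=3, stride=2),   # C*256(k3)
        conv_bn_relu(256, 128, k=3, stride=2),   # C*128(k3)
        conv_bn_relu(128, 3, k=3, stride=2),     # C*3(k3)
        nn.Upsample(size=(18, 36), mode="bilinear", align_corners=False),  # BU(18, 36)
    )

    blob = torch.randn(1, 768, 60, 80)           # concatenated Enc/Normal/Albedo features
    print(decoder(blob).shape)                   # torch.Size([1, 3, 18, 36])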
“…HoloGAN first learns a 3D representation, which is then transformed to a target pose, projected to 2D features, and rendered to generate the final images ( Figure 2 right). Different from recent work that employs hand-crafted differentiable renderers [18,22,29,34,36,51,64], HoloGAN learns perspective projection and rendering of 3D features from scratch using a projection unit [40]. This novel architecture enables HoloGAN to learn 3D representations directly from natural images for which there are no good hand-crafted differentiable renderers.…”
Section: Introduction (mentioning)
confidence: 99%
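
The projection unit the excerpt refers to (cited there as [40]) is commonly described as collapsing the depth axis of a 3D feature tensor into channels and applying a learned convolution. The sketch below follows that description; the channel sizes and activation are illustrative assumptions, not HoloGAN's exact configuration.

    import torch
    import torch.nn as nn

    class ProjectionUnit(nn.Module):
        """Learned projection from 3D features to 2D features: flatten the
        depth dimension into channels, then let a 1x1 convolution learn an
        occlusion-aware projection. Sizes are illustrative assumptions."""

        def __init__(self, channels: int, depth: int, out_channels: int):
            super().__init__()
            self.proj = nn.Sequential(
                nn.Conv2d(channels * depth, out_channels, kernel_size=1),
                nn.LeakyReLU(0.2, inplace=True),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, channels, depth, height, width)
            b, c, d, h, w = x.shape
            return self.proj(x.reshape(b, c * d, h, w))

    print(ProjectionUnit(64, 16, 256)(torch.randn(2, 64, 16, 32, 32)).shape)
    # torch.Size([2, 256, 32, 32])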