2021
DOI: 10.48550/arxiv.2106.05429
Preprint
Deep Direct Volume Rendering: Learning Visual Feature Mappings From Exemplary Images

Jakob Weiss,
Nassir Navab
Cited by 3 publications (7 citation statements) · References 31 publications
Order By: Relevance
“…OpenDR [71] is one of the earliest implementations of this idea. Although it assumes a simple lighting model and cannot render complex effects such as inter-reflections, OpenDR paved the way for an entire class of differentiable renderers, such as SoftRas [72], Mitsuba2 [73], PyRedner [74], among others [75], [76], which can handle ever more complex forward light simulations. While differentiable rendering can be used in isolation, the framework can also work alongside neural networks.…”
Section: Differentiable Rendering
confidence: 99%
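The excerpt above describes differentiable rendering: a forward image-formation model through which gradients of an image-space loss flow back to scene parameters. A minimal sketch of that idea, assuming a toy Lambertian shading model in plain NumPy (all names and values are illustrative, not any cited system):

```python
import numpy as np

def render(albedo, normals, light):
    """Forward model: one diffuse-shaded pixel per surface normal."""
    shading = np.maximum(normals @ light, 0.0)  # clamped n·l
    return albedo * shading                     # pixel intensities

normals = np.array([[0.0, 0.0, 1.0],
                    [0.0, 0.6, 0.8]])
light   = np.array([0.0, 0.0, 1.0])

target = render(0.7, normals, light)  # "photograph" made with albedo 0.7

albedo = 0.1                          # bad initial guess
lr = 0.5
for _ in range(200):
    pred = render(albedo, normals, light)
    # Analytic gradient of the L2 image loss w.r.t. albedo:
    # d/da sum((a*s - t)^2) = 2 * sum(s * (a*s - t))
    shading = np.maximum(normals @ light, 0.0)
    grad = 2.0 * np.sum(shading * (pred - target))
    albedo -= lr * grad

print(round(albedo, 3))  # recovers ~0.7
```

Real differentiable renderers generalize this: the forward pass simulates light transport, and automatic differentiation (rather than a hand-derived gradient) carries the image loss back to geometry, materials, or a neural network's weights.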
“…State-of-the-art techniques parameterize the scene with a neural network, using a differentiable renderer for novel-view synthesis. Typically, volumetric rendering is used for the differentiable rendering component, as its continuous representation has been shown to work well with gradient descent [76], [87], although there is a plethora of techniques that use different scene representations and rendering techniques [2], [78]. This physics-and-machine-learning framework has been highly effective, and subsequent works have added physics-based priors such as reflectance models [88], [89], normal estimation [90], and shadow models [91] to enable better novel-view synthesis and 3D reconstruction.…”
Section: Neural Rendering
confidence: 99%
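The volumetric rendering the excerpt refers to is typically the emission-absorption quadrature that NeRF-style methods differentiate through. A hedged sketch of that compositing step along a single ray (densities, colors, and step sizes below are made-up illustrative values):

```python
import numpy as np

def composite(sigmas, colors, deltas):
    """Front-to-back emission-absorption compositing:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    where T_i is the transmittance accumulated before sample i."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-sample opacity
    trans = np.concatenate(([1.0], np.cumprod(1.0 - alphas)[:-1]))   # T_i
    weights = trans * alphas
    return weights @ colors, weights

sigmas = np.array([0.0, 4.0, 10.0])         # volume densities along one ray
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])        # per-sample emitted colors
deltas = np.full(3, 0.1)                    # step sizes between samples

pixel, weights = composite(sigmas, colors, deltas)
```

Every operation here is smooth in the densities and colors, which is why gradient descent can optimize a neural field that predicts `sigmas` and `colors` from an image loss on `pixel`.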
“…We also refer to their research tasks when categorizing the surveyed papers in the respective tables according to learning type, network architecture, loss function, and evaluation metric. The description is accompanied by a table of the surveyed works (authors [ref], model where named, venue):

- Han and Wang [51], TSR-TVD, TVCG
- Han and Wang [50], SSR-TVD, TVCG
- Han et al. [55], STNet, TVCG
- Wurster et al. [167], arXiv
- Guo et al. [47], SSR-VFD, PVIS
- Jakob et al. [76], TVCG
- Sahoo and Berger [126], IA-VFS, EVIS
- An et al. [2], STSRNet, CG&A
- Han and Wang [53], TSR-VFD, C&G
- Xie et al. [168], tempoGAN, TOG
- Werhahn et al. [162], CGIT
- Wang et al. [156], DeepOrganNet, TVCG
- Lu et al. [109], neurcomp, CGF
- Weiss et al. [160], fV-SRN, arXiv
- Shi et al. [131], GNN-Surrogate, TVCG
- Han and Wang [54], VCNet, VI
- Liu et al. [106], JOV
- Han et al. [49], CG&A
- Gu et al. [45], VFR-UFD, CG&A
- Han et al. [56], V2V, TVCG
- Gu et al. [46], Scalar2Vec, PVIS
- Kim et al. [84], Deep Fluids, CGF
- Chu et al. [27], TOG
- Wiewel et al. [163], LSP, CGF
- Wiewel et al. [164], LSS, CGF
- Berger et al. [12], TVCG
- Hong et al. [70], DNN-VolVis, PVIS
- He et al. [63], InSituNet, TVCG
- Weiss et al. [159], TVCG
- Weiss et al. [161], TVCG
- Weiss and Navab [158], DeepDVR, arXiv
- He et al. [62], CECAV-DNN, VI
- Tkachev et al. [143], TVCG
- Hong et al. [71], PVIS
- Kim and Günther [85], CGF
- Han et al. [57], arXiv
- Yang et al. [169], JOV
- Shi and Tao [130], TIST
- Engel and Ropinski …”
Section: DL4SciVis Work
confidence: 99%
“…LSP can achieve 150× speedups compared with a regular pressure solver, a significant boost in simulation performance. Wiewel et al. [164] proposed latent space subdivision (LSS), an end-to-end DL solution for robust prediction of future timesteps of complex fluid simulations with high temporal stability. Using a CNN and stacked LSTMs, LSS achieves both spatial compression and temporal prediction.

The excerpt interleaves a table of surveyed works with their inputs and outputs:

- Berger et al. [12]: new viewpoint and transfer function → synthesized rendering conditioned on input
- Hong et al. [70] (DNN-VolVis): original rendering, goal effect, new viewpoint → synthesized rendering conditioned on input
- He et al. [63] (InSituNet): ensemble simulation parameters → synthesized rendering conditioned on input
- Weiss et al. [159]: low-resolution isosurface maps, optical flow → high-resolution isosurface maps
- Weiss et al. [161]: low-resolution image → high-resolution image
- Weiss and Navab [158] (DeepDVR): volume, viewpoint → rendered image
- He et al. [62] (CECAV-DNN): sequence of ensemble pairs → likelihood each member is from one ensemble
- Tkachev et al. [143]: local spatiotemporal patch → future voxel value at patch center
- Hong et al. [71]: movement sequence → probability vector of next movement
- Kim and Günther [85]: unsteady 2D vector field → reference frame transformation
- Han et al. [57]: particle start location, file cycles → particle end location
- Yang et al. [169]: volume rendering under viewpoint → viewpoint quality score
- Shi and Tao [130]: volume rendering image → estimated viewpoint
- Engel and Ropinski [35] (DVAO): intensity volume, opacity volume or transfer function → AO volume…”
Section: Compression and Reconstruction
confidence: 99%
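The latent-space prediction pattern behind LSP and LSS (encode each frame to a compact code, step the dynamics forward in latent space, decode back to the full field) can be sketched as follows. This is a toy illustration, not the actual LSS architecture: the encoder/decoder and dynamics here are linear for brevity, where LSS uses a CNN encoder/decoder with stacked LSTMs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "simulation": frames rotate inside a 2-D subspace of a 16-D state.
basis, _ = np.linalg.qr(rng.normal(size=(16, 2)))   # orthonormal 16x2 basis
theta = 0.3
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])   # latent dynamics

z = np.array([1.0, 0.0])
frames = []
for _ in range(5):
    frames.append(basis @ z)   # "decode" latent state to a full frame
    z = rot @ z                # advance the dynamics in latent space

encode = lambda x: basis.T @ x   # linear encoder (projection onto subspace)
decode = lambda z: basis @ z     # linear decoder

# Predict frame 4 from frame 3 entirely in the compressed latent space.
z3 = encode(frames[3])
z4_pred = rot @ z3
frame4_pred = decode(z4_pred)

err = np.linalg.norm(frame4_pred - frames[4])  # exact here, since dynamics are linear
```

The speedup claims in the excerpt come from exactly this structure: the expensive full-resolution solve is replaced by a cheap update on a low-dimensional code, with the decoder amortizing reconstruction.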