2021
DOI: 10.1109/tvcg.2020.3030344
Deep Volumetric Ambient Occlusion

Abstract: [Fig. 1 panel labels: Full Render with AO, AO only, Without AO, Ground Truth AO.] Fig. 1: Volume rendering with volumetric ambient occlusion achieved through Deep Volumetric Ambient Occlusion (DVAO). DVAO uses a 3D convolutional encoder-decoder architecture to predict ambient occlusion volumes for a given combination of volume data and transfer function. While we introduce and compare several representation and injection strategies for capturing the transfer function information, the shown images result from preclassified injection bas…
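The abstract describes DVAO as a 3D convolutional encoder-decoder that predicts an ambient occlusion volume for a given volume and transfer function. As an illustration only, the PyTorch sketch below shows one plausible shape of such a network for the preclassified case, where the transfer function has already been applied and the input is an opacity volume. The layer counts, channel widths, and skip connections are assumptions, not the published architecture.

```python
# Illustrative sketch of a 3D convolutional encoder-decoder in the spirit of DVAO:
# it maps a single-channel (preclassified) opacity volume to an ambient-occlusion
# volume. Layer counts, channel widths and skip connections are assumptions.
import torch
import torch.nn as nn

class ConvBlock3d(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class AOEncoderDecoder(nn.Module):
    def __init__(self, base_ch=16):
        super().__init__()
        # Encoder: progressively downsample the opacity volume.
        self.enc1 = ConvBlock3d(1, base_ch)
        self.enc2 = ConvBlock3d(base_ch, base_ch * 2)
        self.enc3 = ConvBlock3d(base_ch * 2, base_ch * 4)
        self.pool = nn.MaxPool3d(2)
        # Decoder: upsample back to full resolution with skip connections.
        self.up2 = nn.ConvTranspose3d(base_ch * 4, base_ch * 2, kernel_size=2, stride=2)
        self.dec2 = ConvBlock3d(base_ch * 4, base_ch * 2)
        self.up1 = nn.ConvTranspose3d(base_ch * 2, base_ch, kernel_size=2, stride=2)
        self.dec1 = ConvBlock3d(base_ch * 2, base_ch)
        # One ambient-occlusion value in [0, 1] per voxel.
        self.head = nn.Sequential(nn.Conv3d(base_ch, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, opacity):                  # opacity: (N, 1, D, H, W)
        e1 = self.enc1(opacity)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                     # predicted AO volume, (N, 1, D, H, W)

if __name__ == "__main__":
    net = AOEncoderDecoder()
    dummy = torch.rand(1, 1, 64, 64, 64)         # preclassified opacity volume
    print(net(dummy).shape)                      # torch.Size([1, 1, 64, 64, 64])
```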

Cited by 20 publications (15 citation statements)
References 45 publications

“…We also refer to their research tasks when categorizing the surveyed papers in the respective tables according to learning type, network architecture, loss function, and evaluation metric. [A table of the surveyed works is interleaved here, listing authors, method acronym, and venue for each entry, e.g. Han and Wang [50] SSR-TVD (TVCG), Kim et al [84] Deep Fluids (CGF), Wiewel et al [164] LSS (CGF), Weiss and Navab [158] DeepDVR (arXiv); the excerpt ends, truncated, at Engel and Ropinski.]…”
Section: DL4SciVis Work
confidence: 99%
“…LSP can achieve 150× speedups compared with a regular pressure solver, a significant boost in simulation performance. Wiewel et al [164] proposed latent space subdivision (LSS), an end-to-end DL solution for the robust prediction of future timesteps of complex fluid simulations with high temporal stability. Using CNN and stacked LSTM, LSS achieves both spatial compression and temporal prediction. [A table mapping each surveyed model's inputs to its outputs is interleaved in this passage:
Berger et al [12]: new viewpoint and transfer function → synthesized rendering conditioned on input
Hong et al [70] DNN-VolVis: original rendering, goal effect, new viewpoint → synthesized rendering conditioned on input
He et al [63] InSituNet: ensemble simulation parameters → synthesized rendering conditioned on input
Weiss et al [159]: low-resolution isosurface maps, optical flow → high-resolution isosurface maps
Weiss et al [161]: low-resolution image → high-resolution image
Weiss and Navab [158] DeepDVR: volume, viewpoint → rendering image
He et al [62] CECAV-DNN: sequence of ensemble pairs → likelihood each member is from one ensemble
Tkachev et al [143]: local spatiotemporal patch → future voxel value at patch center
Hong et al [71]: movement sequence → probability vector of next movement
Kim and Günther [85]: unsteady 2D vector field → reference frame transformation
Han et al [57]: particle start location, file cycles → particle end location
Yang et al [169]: volume rendering under viewpoint → viewpoint quality score
Shi and Tao [130]: volume rendering image → estimated viewpoint
Engel and Ropinski [35] DVAO: intensity volume, opacity volume or transfer function → AO volume]…”
Section: Compression and Reconstruction
confidence: 99%
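The table row quoted above lists DVAO's inputs as an intensity volume plus either an opacity volume or a transfer function. Under the preclassified injection strategy mentioned in the abstract, the transfer function is applied to the intensity volume up front, so the network receives the resulting opacity volume. Below is a minimal sketch of that preprocessing step, assuming a 1D opacity transfer function stored as a lookup table; the function and variable names are hypothetical.

```python
# Minimal sketch of preclassified transfer-function injection: the 1D opacity
# transfer function is applied to the intensity volume before the network sees it.
# Names, lookup-table size, and normalization are assumptions.
import numpy as np

def apply_opacity_tf(intensity: np.ndarray, tf_opacity: np.ndarray) -> np.ndarray:
    """Map an intensity volume through a 1D opacity transfer function.

    intensity  : 3D array of scalar values (any range).
    tf_opacity : 1D lookup table of opacities in [0, 1].
    returns    : opacity volume with the same shape as `intensity`.
    """
    # Normalize intensities to [0, 1] before indexing the lookup table.
    lo, hi = intensity.min(), intensity.max()
    normalized = (intensity - lo) / max(hi - lo, 1e-8)
    indices = np.clip((normalized * (len(tf_opacity) - 1)).astype(int),
                      0, len(tf_opacity) - 1)
    return tf_opacity[indices]

# Example: a simple ramp transfer function that makes dense voxels opaque.
volume = np.random.rand(64, 64, 64).astype(np.float32)
tf = np.linspace(0.0, 1.0, 256, dtype=np.float32)
opacity_volume = apply_opacity_tf(volume, tf)   # input to the AO network
```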
“…For DVR, deep learning can exploit the learned features to assist in parameter setting in the visualization. For example, Engel and Ropinski (2021) proposed deep volumetric ambient occlusion (DVAO) that combines global unstructured information and 3D CNN operations to compute volumetric ambient occlusion and thereby enhance the quality of interactive DVR. Berger et al (2018) leveraged a generative adversarial network (GAN) to learn a view-invariant latent space and encode how transfer functions affected the rendered results to assist users in transfer function design.…”
Section: Deep Learning for Volume Visualization
confidence: 99%
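The statement above notes that DVAO's predicted occlusion is used to enhance interactive DVR, but the snippet does not say how the renderer consumes the AO volume. The sketch below assumes one common approach: sample the AO volume alongside the data volume during front-to-back compositing and darken each sample's color by its occlusion factor. This is a generic integration scheme, not DVAO's renderer.

```python
# Hedged sketch: front-to-back compositing along one ray, with each sample's color
# attenuated by an ambient-occlusion factor looked up from a precomputed AO volume
# (ao = 1 means fully occluded). Values and names here are illustrative.
import numpy as np

def composite_ray(samples_rgb, samples_alpha, samples_ao):
    """samples_* are per-sample arrays ordered front to back along the ray."""
    color = np.zeros(3)
    alpha = 0.0
    for rgb, a, ao in zip(samples_rgb, samples_alpha, samples_ao):
        shaded = rgb * (1.0 - ao)          # occluded samples receive less ambient light
        color += (1.0 - alpha) * a * shaded
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                   # early ray termination
            break
    return color, alpha

# Example with three samples along a ray.
rgb = np.array([[1.0, 0.8, 0.6], [0.9, 0.9, 0.9], [0.2, 0.2, 0.8]])
a = np.array([0.3, 0.5, 0.7])
ao = np.array([0.1, 0.6, 0.9])             # values sampled from the predicted AO volume
print(composite_ray(rgb, a, ao))
```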
“…Ament and Dachsbacher [1] also proposed a way to compute anisotropic shading of surface-like structures, though its perceptual benefits require further investigation. Recent works also investigate the use of denoising [25] for volumetric path tracing and 3D convolutional neural networks (CNNs) [18] to approximate ambient occlusion.…”
Section: Volumetric Illumination
confidence: 99%