2017
DOI: 10.1007/s00371-017-1384-7

Analysis of reported error in Monte Carlo rendered images

Abstract: Evaluating image quality in Monte Carlo rendered images is an important aspect of the rendering process as we often need to determine the relative quality between images computed using different algorithms and with varying amounts of computation. The use of a gold-standard, reference image, or ground truth is a common method to provide a baseline with which to compare experimental results. We show that if not chosen carefully, the quality of reference images used for image quality assessment can skew results […]
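The abstract's central claim, that an insufficiently converged reference biases the measured error, is easy to illustrate numerically. The sketch below is not from the paper: it substitutes synthetic Gaussian noise for Monte Carlo noise and uses made-up noise levels, purely to show how the reference's own noise inflates the reported MSE.

```python
# Illustration only (not from the paper): MSE measured against a noisy
# reference overstates the true error by roughly the reference's own
# noise variance, which is why near-converged references matter.
import numpy as np

rng = np.random.default_rng(0)
ground_truth = rng.uniform(0.0, 1.0, size=(256, 256))  # stand-in for a fully converged render

test_render = ground_truth + rng.normal(0.0, 0.05, ground_truth.shape)   # image under evaluation
good_ref    = ground_truth + rng.normal(0.0, 0.005, ground_truth.shape)  # high-sample reference
poor_ref    = ground_truth + rng.normal(0.0, 0.05, ground_truth.shape)   # under-sampled reference

mse = lambda a, b: float(np.mean((a - b) ** 2))
print("true MSE        :", mse(test_render, ground_truth))
print("MSE vs good ref :", mse(test_render, good_ref))
print("MSE vs poor ref :", mse(test_render, poor_ref))  # inflated by the reference's own noise
```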

Cited by 7 publications (4 citation statements)
References 55 publications
“…We analyze the output of the Convolutional Autoencoder in Section 6.1, perform an ablation study in Section 6.2 and evaluate image quality using established metrics like SSIM and Entropy in Section 6.3. SSIM is preferred in our experiments over, e.g., PSNR, since we are analyzing Monte Carlo rendered images [38]. The difference in amount and depth of indirect illumination paths across the test scenes allows us to verify that our network is able to deal with different input data quality seamlessly, and that it is able to extrapolate and adapt to geometry not encountered during training.…”
Section: Evaluation and Results
confidence: 97%
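For readers who want to reproduce this kind of comparison, a minimal sketch of computing SSIM alongside PSNR with scikit-image follows; the file names and the RGB float layout are assumptions for illustration, not details from the cited work.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

# Hypothetical file names; assumes HxWx3 float arrays already tone-mapped to [0, 1].
reference = np.load("reference_render.npy")  # high-sample reference image
test      = np.load("test_render.npy")       # image under evaluation

ssim_value = structural_similarity(reference, test, channel_axis=-1, data_range=1.0)
psnr_value = peak_signal_noise_ratio(reference, test, data_range=1.0)
print(f"SSIM: {ssim_value:.4f}   PSNR: {psnr_value:.2f} dB")
```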
“…Both qualitative and quantitative metrics were used to benchmark the PT and BDPT algorithms. Regarding the qualitative metric used for the final image quality assessment, the Structural Similarity Index Metric (SSIM) (Wang et al., 2004) is the preferred method for Monte Carlo rendered images (Whittle et al., 2017). We used a tool that computes the (dis)similarity (DSSIM) between two or more PNG images using the SSIM algorithm at multiple weighted resolutions (Kornelski, 2020).…”
Section: Testing Methodology
confidence: 99%
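The Kornelski tool is a standalone CLI; as a rough Python stand-in, DSSIM is often taken as (1 − SSIM)/2 on a single scale. The sketch below uses that convention and will not match the tool's multi-scale, weighted score exactly; the file names are placeholders.

```python
# Single-scale DSSIM approximation; the dssim CLI uses its own multi-scale weighting.
import imageio.v3 as iio
from skimage.metrics import structural_similarity

reference = iio.imread("reference.png")  # placeholder file names
test      = iio.imread("test.png")

ssim = structural_similarity(reference, test, channel_axis=-1, data_range=255)
dssim = (1.0 - ssim) / 2.0  # 0 = identical; larger = more dissimilar
print(f"SSIM = {ssim:.4f}, DSSIM = {dssim:.4f}")
```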
“…Subr and Kautz [SK13] analyzed the error caused by various Monte Carlo (MC) sampling patterns via Fourier analysis, but their conclusions do not generalize. Whittle et al. [WJM17] provided an extensive overview of error metrics for images and examined the influence of poor references when computing those measures. However, they did not investigate their variance or sensitivity to outliers.…”
Section: Introduction
confidence: 99%
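To make the outlier-sensitivity point concrete, here is a sketch of per-image error metrics commonly used in Monte Carlo rendering work; the epsilon values are typical guards, not taken from any of the cited papers. Relative metrics divide by the near-zero reference in dark regions, which is where outlier behaviour usually originates.

```python
import numpy as np

def mse(test, ref):
    """Mean squared error against the reference image."""
    return float(np.mean((test - ref) ** 2))

def relative_mse(test, ref, eps=1e-2):
    """Relative MSE; eps guards against division by near-black reference pixels."""
    return float(np.mean((test - ref) ** 2 / (ref ** 2 + eps)))

def mean_absolute_relative_error(test, ref, eps=1e-2):
    """Relative absolute error; also dominated by dark-pixel outliers for small eps."""
    return float(np.mean(np.abs(test - ref) / (ref + eps)))
```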
“…Our reference solutions were computed using a cluster and millions of samples per pixel, several orders of magnitude more than were used for ⟨I⟩_N. To put this into perspective, Whittle et al. recommended using reference images with at least an order of magnitude more samples [WJM17]. If the T, 4T, and 16T curves overlap, then the algorithm behaves like […] in that frequency and rendering budget range.…”
Section: Introduction
confidence: 99%
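A minimal sketch of the kind of budget-scaling convergence check described above, assuming a hypothetical render(spp) callable and a near-converged reference; none of this code is from [WJM17] or the citing paper.

```python
import numpy as np

def check_convergence(render, reference, base_spp, factors=(1, 4, 16)):
    """render(spp) -> image array; reference should be a near-converged render.

    Returns the MSE at each budget (T, 4T, 16T) and the log-log slope: a slope
    near -1 matches the canonical O(1/N) Monte Carlo rate, i.e. the error drops
    roughly 4x each time the sample budget quadruples.
    """
    errors = [float(np.mean((render(f * base_spp) - reference) ** 2)) for f in factors]
    slope = float(np.polyfit(np.log(factors), np.log(errors), 1)[0])
    return errors, slope
```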