Figure 1: Row 1: Input images with 0.5, 1.9, 2.7, and 4.6 megapixels, respectively. Rows 2-5: Downscaled results, 128 pixels wide, produced by Kopf et al. [2013], Öztireli and Gross [2015], DPID λ=1.0, and DPID λ=0.5. Our algorithm (DPID) preserves stars in Example 1, thin lines in Example 2, roof tiles in Example 3, and text, lines, and notes in Example 4.
In this article we use an electroencephalograph (EEG) to explore the perception of artifacts that typically appear during rendering and to determine the perceptual quality of a sequence of images. Although there is emerging interest in using EEG for image quality assessment, one of the main impediments is its very low signal-to-noise ratio (SNR), which makes it exceedingly difficult to distinguish neural responses from noise. Traditionally, event-related potentials (ERPs) have been used to analyze EEG data. However, ERPs rely on averaging and therefore require a large number of participants and trials to yield meaningful data; due to the low SNR, they are also unsuited to single-trial classification. We propose a novel wavelet-based approach for evaluating EEG signals that allows us to predict perceived image quality from only a single trial. Our wavelet-based algorithm filters the EEG data and removes noise, eliminating the need for many participants or many trials. With this approach it is possible to use data from only 10 electrode channels for single-trial classification and to predict the presence of an artifact with an accuracy of 85%. We also show that a trial can be classified by the exact type of artifact viewed. Our work is particularly useful for understanding how the human visual system responds to different types of degradation in images and videos; an understanding of the perception of typical image-based rendering artifacts forms the basis for optimizing rendering and masking algorithms.
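To make the kind of pipeline the abstract describes concrete, here is a minimal sketch of wavelet-based denoising followed by single-trial classification, written against PyWavelets and scikit-learn. The wavelet family (db4), the universal soft threshold, the logistic-regression classifier, and the synthetic stand-in data are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: denoise each EEG channel with a discrete wavelet
# transform, then classify single trials from the retained
# approximation coefficients. All concrete choices below are
# assumptions for illustration, not the paper's exact method.
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 10, 512  # 10 electrode channels

def denoise_channel(x, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients (universal threshold)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745  # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def features(trial):
    """Concatenate denoised approximation coefficients of all channels."""
    feats = []
    for ch in trial:
        clean = denoise_channel(ch)
        feats.append(pywt.wavedec(clean, "db4", level=4)[0])
    return np.concatenate(feats)

# Synthetic stand-in data: "artifact" trials carry a small
# low-frequency deflection buried in noise (placeholder for real EEG).
labels = rng.integers(0, 2, n_trials)
t = np.linspace(0, 1, n_samples)
trials = rng.normal(0.0, 1.0, (n_trials, n_channels, n_samples))
trials[labels == 1] += 0.4 * np.sin(2 * np.pi * 3 * t)

X = np.array([features(tr) for tr in trials])
clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```

The wavelet step matters because thresholding in the coefficient domain suppresses broadband noise while preserving the transient structure of the evoked response, which is what makes classification feasible without averaging over many trials.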
Interactive exploration of animated volume data is required by many applications, but the huge computational time and storage requirements of rendering have so far prevented the visualization of animated volumes. In this paper we introduce an algorithm, running at interactive frame rates, that is based on 3D wavelet transforms and supports arbitrary wavelets, motion compensation techniques, and various encoding schemes for the resulting wavelet coefficients. We analyze different families and orders of wavelets with respect to compression ratio and the error they introduce. We use a quantization that has been optimized for the visual impression of the reconstructed volume, independent of the viewing direction. This enables very high compression ratios while still reconstructing the volume with as few visual artifacts as possible. The compression ratio is further improved by a motion compensation scheme that exploits temporal coherence. Using this scheme we can decompress each volume of an animation at interactive frame rates while visualizing the decompressed volumes on a single PC. We also present improved visualization algorithms for high-quality display using OpenGL hardware, running at interactive frame rates on a standard PC.
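As a rough sketch of the decompose-quantize-reconstruct loop, the snippet below applies a 3D wavelet transform with PyWavelets, zeroes all but the largest coefficients (a crude stand-in for the visually optimized quantization described above), and reconstructs the volume. The wavelet family, keep ratio, and synthetic test volume are assumptions; motion compensation and the OpenGL rendering path are not shown.

```python
# Minimal sketch of lossy volume compression with a 3D wavelet
# transform. Keeping only the largest coefficients stands in for
# the paper's perceptually optimized quantization.
import numpy as np
import pywt

# Smooth synthetic test volume (a Gaussian blob), a placeholder
# for a real animation frame.
x, y, z = np.meshgrid(*(np.linspace(-1, 1, 64),) * 3, indexing="ij")
volume = np.exp(-4 * (x**2 + y**2 + z**2)).astype(np.float32)

def compress(vol, wavelet="bior4.4", keep=0.05):
    """Zero all but the largest fraction `keep` of wavelet coefficients."""
    coeffs = pywt.wavedecn(vol, wavelet, level=3)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thr = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thr] = 0.0
    return pywt.array_to_coeffs(arr, slices, output_format="wavedecn")

def decompress(coeffs, wavelet="bior4.4", shape=None):
    out = pywt.waverecn(coeffs, wavelet)
    # Crop any padding introduced by the transform.
    return out[tuple(slice(s) for s in shape)] if shape else out

coeffs = compress(volume)
recon = decompress(coeffs, shape=volume.shape)
rms = np.sqrt(np.mean((volume - recon) ** 2))
print(f"RMS error with 5% of coefficients kept: {rms:.5f}")
```

For an animation, the same machinery would be applied to the difference between a volume and its (motion-compensated) predecessor rather than to each frame independently, which is how temporal coherence raises the compression ratio.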