We present a new method for rendering high dynamic range images on conventional displays. Our method is conceptually simple, computationally efficient, robust, and easy to use. We manipulate the gradient field of the luminance image by attenuating the magnitudes of large gradients. A new, low dynamic range image is then obtained by solving a Poisson equation on the modified gradient field. Our results demonstrate that the method is capable of drastic dynamic range compression, while preserving fine details and avoiding common artifacts, such as halos, gradient reversals, or loss of local contrast. The method is also able to significantly enhance ordinary images by bringing out detail in dark regions.
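As a rough illustration of the pipeline this abstract describes, the sketch below attenuates large log-luminance gradients and recovers a compressed image by solving the resulting Poisson equation. The attenuation function, its parameters, and the DCT-based Neumann solver are our assumptions for exposition, not the authors' implementation:

```python
# A minimal sketch of gradient-domain dynamic range compression, assuming a
# power-law attenuation of gradient magnitudes and a DCT-based Poisson solver.
import numpy as np
from scipy.fft import dctn, idctn

def compress_hdr(luminance, alpha_scale=0.1, beta=0.85):
    H = np.log(luminance + 1e-6)                # work in the log-luminance domain
    gx = np.diff(H, axis=1, append=H[:, -1:])   # forward-difference gradients
    gy = np.diff(H, axis=0, append=H[-1:, :])
    mag = np.sqrt(gx**2 + gy**2) + 1e-6
    alpha = alpha_scale * mag.mean()
    phi = (alpha / mag) * (mag / alpha) ** beta  # attenuate gradients above alpha
    gx, gy = gx * phi, gy * phi

    # Divergence of the modified gradient field (backward differences).
    div = np.zeros_like(H)
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]; div[:, 0] += gx[:, 0]
    div[1:, :] += gy[1:, :] - gy[:-1, :]; div[0, :] += gy[0, :]

    # Solve the Poisson equation lap(I) = div with Neumann boundaries via DCT.
    h, w = H.shape
    wx = 2 * np.cos(np.pi * np.arange(w) / w) - 2
    wy = 2 * np.cos(np.pi * np.arange(h) / h) - 2
    denom = wy[:, None] + wx[None, :]
    denom[0, 0] = 1.0                            # pin the free additive constant
    I = idctn(dctn(div) / denom)
    return np.exp(I - I.max())                   # back to luminance, scaled to (0, 1]
```

Working in the log domain turns luminance ratios into differences, so attenuating gradient magnitudes compresses contrast multiplicatively while leaving small (detail-carrying) gradients largely intact.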
Stochastic point distributions with a blue-noise spectrum are used extensively in computer graphics for applications such as avoiding aliasing artifacts in ray tracing, halftoning, and stippling. In this paper we present a new approach for generating point sets with high-quality blue-noise properties that formulates the problem using a statistical-mechanics interacting particle model. Point distributions are generated by sampling this model. This new formulation unifies randomness with the requirement for equidistant point spacing that is responsible for the enhanced blue-noise spectral properties. We derive a highly efficient multi-scale sampling scheme for drawing random point distributions from this model. The new scheme avoids the critical slowing-down phenomenon that plagues models of this type. This derivation is accompanied by a model-specific analysis. Altogether, our approach generates high-quality point distributions, supports spatially varying point density, and runs in time that is linear in the number of points generated.
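For intuition about the interacting-particle formulation, here is a deliberately naive single-scale Metropolis sampler of a repulsive pair-potential model on the unit torus. It is exactly the kind of sampler that suffers the critical slowing down the paper's multi-scale scheme avoids; the Gaussian potential and all parameters are illustrative assumptions, not the paper's model:

```python
# A naive single-scale Metropolis sampler for an interacting-particle model
# with short-range repulsion, assuming a Gaussian pair potential on a torus.
import numpy as np

def pair_energy(p, points, sigma=0.03):
    d = np.abs(points - p)
    d = np.minimum(d, 1.0 - d)                   # wrap-around (toroidal) distances
    d2 = (d ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma**2)).sum()    # Gaussian repulsion energy

def sample_blue_noise(n=1000, sweeps=200, step=0.01, beta=50.0, seed=None):
    rng = np.random.default_rng(seed)
    pts = rng.random((n, 2))                     # start from white noise
    for _ in range(sweeps):
        for i in range(n):
            old = pts[i].copy()
            new = (old + step * rng.normal(size=2)) % 1.0
            others = np.delete(pts, i, axis=0)
            dE = pair_energy(new, others) - pair_energy(old, others)
            if dE < 0 or rng.random() < np.exp(-beta * dE):
                pts[i] = new                     # Metropolis accept/reject
    return pts
```

Mutual repulsion pushes points toward equidistant spacing while the thermal noise keeps the arrangement random, which is the combination the abstract credits for blue-noise spectra.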
We present a new image-based technique for enhancing the shape and surface details of an object. The input to our system is a small set of photographs taken from a fixed viewpoint, but under varying lighting conditions. For each image we compute a multiscale decomposition based on the bilateral filter and then reconstruct an enhanced image that combines detail information at each scale across all the input images. Our approach does not require any information about light source positions or camera calibration, and can produce good results with 3 to 5 input images. In addition, our system provides a few high-level parameters for controlling the amount of enhancement and does not require pixel-level user input. We show that the bilateral filter is a good choice for our multiscale algorithm because it avoids the halo artifacts commonly associated with the traditional Laplacian image pyramid. We also develop a new scheme for computing our multiscale bilateral decomposition that is simple to implement, fast (O(N² log N)), and accurate.
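The following is a minimal sketch of a multiscale bilateral decomposition and a cross-image detail combination in the spirit of this abstract, not the authors' code: the per-scale sigmas, the averaged base layer, and the max-magnitude combination rule are assumptions:

```python
# Sketch: bilateral multiscale decomposition + cross-image detail combination,
# assuming doubling sigmas per level and a strongest-detail-wins merge rule.
import cv2
import numpy as np

def bilateral_decompose(img, levels=3, sigma_s=2.0, sigma_r=0.1):
    """Split a grayscale float32 image into a base layer plus detail layers."""
    base, details = img.astype(np.float32), []
    for i in range(levels):
        smoothed = cv2.bilateralFilter(base, d=-1,
                                       sigmaColor=sigma_r * (2 ** i),
                                       sigmaSpace=sigma_s * (2 ** i))
        details.append(base - smoothed)          # detail captured at this scale
        base = smoothed
    return base, details

def enhance(images, gain=1.5):
    """Combine per-scale detail across images lit from different directions."""
    decomps = [bilateral_decompose(im) for im in images]
    out = np.mean([b for b, _ in decomps], axis=0)   # shared base layer
    for lvl in range(len(decomps[0][1])):
        stack = np.stack([d[lvl] for _, d in decomps])
        idx = np.abs(stack).argmax(axis=0)           # strongest detail wins
        out += gain * np.take_along_axis(stack, idx[None], axis=0)[0]
    return out
```

Because the bilateral filter does not smooth across strong edges, the detail layers stay free of the overshoots that cause halos in a Laplacian pyramid.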
In this paper we propose a new method for upsampling images which is capable of generating sharp edges with reduced input-resolution grid-related artifacts. The method is based on a statistical edge dependency relating certain edge features of two different resolutions, which is generically exhibited by real-world images. While other solutions assume some form of smoothness, we rely on this distinctive edge dependency as our prior knowledge in order to increase image resolution. In addition to this relation, we require that intensities are conserved; the output image must be identical to the input image when downsampled to the original resolution. Altogether, the method consists of solving a constrained optimization problem, attempting to impose the correct edge relation and conserve local intensities with respect to the low-resolution input image. Results demonstrate the visual importance of having such edge features properly matched, and the method's capability to produce images in which sharp edges are successfully reconstructed.
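The edge-statistics prior is the paper's core contribution and is not reproduced here; the sketch below illustrates only the intensity-conservation constraint, enforced by iterative back-projection. The projection scheme is our assumption (dimensions are assumed divisible by the upsampling factor):

```python
# Sketch of the intensity-conservation constraint only: project the current
# high-resolution estimate onto the set of images that downsample to the input.
import numpy as np
from scipy.ndimage import zoom

def conserve_intensities(low, high, factor=2, iters=20):
    """Iterative back-projection: make `high` downsample exactly to `low`."""
    for _ in range(iters):
        down = zoom(high, 1.0 / factor, order=1)       # simulate downsampling
        residual = low - down                          # constraint violation
        high = high + zoom(residual, factor, order=1)  # back-project residual
    return high

# Usage: start from a smooth interpolation, then enforce the constraint.
# low = ...                               # 2-D low-resolution input array
# high = conserve_intensities(low, zoom(low, 2, order=3))
```

In the full method this projection would alternate with a step imposing the statistical edge relation, so the optimization satisfies both the prior and the conservation constraint.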
Edge-aware operations, such as edge-preserving smoothing and edge-aware interpolation, require assessing the degree of similarity between pairs of pixels, typically defined as a simple monotonic function of the Euclidean distance between pixel values in some feature space. In this work we introduce the idea of replacing these Euclidean distances with diffusion distances, which better account for the global distribution of pixels in their feature space. These distances are approximated using diffusion maps: a set of the dominant eigenvectors of a large affinity matrix, which may be computed efficiently by sampling a small number of matrix columns (the Nyström method). We demonstrate the benefits of using diffusion distances in a variety of image editing contexts, and explore the use of diffusion maps as a tool for facilitating the creation of complex selection masks. Finally, we present a new analysis that establishes a connection between the spatial interaction range between two pixels and the number of samples necessary for accurate Nyström approximations.
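For a concrete picture of diffusion distances, the small-scale sketch below computes a diffusion-map embedding by a full eigendecomposition of the normalized affinity matrix, omitting the Nyström extension for brevity; the feature space, kernel width, and diffusion time are assumptions:

```python
# Sketch: diffusion-map embedding of pixel features via a full eigensolve,
# assuming a Gaussian affinity kernel and a small subsampled pixel set.
import numpy as np

def diffusion_map(features, sigma=0.1, n_eig=10, t=1):
    """features: (n, d) array of per-pixel feature vectors (e.g. Lab color)."""
    d2 = ((features[:, None] - features[None]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))              # Gaussian affinity matrix
    d = W.sum(axis=1)
    S = W / np.sqrt(d[:, None] * d[None, :])      # symmetrized random-walk matrix
    vals, vecs = np.linalg.eigh(S)                # real spectrum, ascending
    vals, vecs = vals[::-1], vecs[:, ::-1]        # reorder to descending
    psi = vecs / np.sqrt(d)[:, None]              # right eigenvectors of D^-1 W
    # Drop the trivial leading eigenvector; scale by eigenvalue^t so that
    # plain Euclidean distance in the embedding equals the diffusion distance.
    return psi[:, 1:n_eig + 1] * vals[1:n_eig + 1] ** t

# Diffusion distance between pixels i and j:
# emb = diffusion_map(feats); dist = np.linalg.norm(emb[i] - emb[j])
```

Unlike a pointwise Euclidean distance, this distance is small only when many short paths through the feature-space distribution connect the two pixels, which is what makes it sensitive to the global pixel distribution.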