Figure 1: Starting from a common layout (left), the user's objective is inferred from the placement of three primitives (push pins), leading to a layout organized vertically by size (middle) and, after a different placement, additionally by brightness horizontally (right).

Abstract
We propose an approach to "pack" a set of two-dimensional graphical primitives into a spatial layout that follows artistic goals. We formalize this process as projecting from a high-dimensional feature space into a 2D layout. Our system does not expose the control of this projection to the user in the form of sliders or similar interfaces. Instead, we infer the desired layout of all primitives from the interactive placement of a small subset of example primitives. To produce a pleasant distribution of primitives with spatial extent, we propose a novel generalization of Centroidal Voronoi Tessellation which equalizes the distances between the boundaries of nearby primitives. Compared to previous primitive-distribution approaches, our GPU implementation achieves both better fidelity and asymptotically higher speed. A user study evaluates the system's usability.
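The abstract describes inferring the layout of all primitives from a few user-placed examples, formalized as a projection from feature space to 2D. Below is a minimal sketch of one way such an inference could look, assuming a plain linear projection fitted by least squares to the placed examples; the function names, the 4D feature vectors, and this specific fitting choice are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def fit_projection(example_features, example_positions):
    """Least-squares fit of a linear map from feature space to the 2D layout.

    example_features : (k, d) feature vectors of the user-placed primitives
    example_positions: (k, 2) their interactively assigned 2D positions
    Returns a (d, 2) projection matrix.
    """
    P, *_ = np.linalg.lstsq(example_features, example_positions, rcond=None)
    return P

def layout_all(features, P):
    """Project every primitive's feature vector into the 2D layout."""
    return features @ P

# Hypothetical usage: three placed "push pins" with 4D features (size, brightness, ...)
examples_f = np.array([[1.0, 0.2, 0.0, 0.5],
                       [0.4, 0.9, 0.1, 0.5],
                       [0.1, 0.1, 0.8, 0.5]])
examples_p = np.array([[0.1, 0.9], [0.5, 0.5], [0.9, 0.1]])
P = fit_projection(examples_f, examples_p)
positions = layout_all(np.random.rand(100, 4), P)  # initial positions, before any relaxation
```

The resulting positions would then still need a relaxation step (such as the boundary-equalizing Voronoi generalization the abstract mentions) to produce a pleasing, overlap-free packing.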
We propose projective blue-noise patterns that retain their blue-noise characteristics when undergoing one or multiple projections onto lower dimensional subspaces. These patterns are produced by extending existing methods, such as dart throwing and Lloyd relaxation, and have a range of applications. For numerical integration, our patterns often outperform state-of-the-art stochastic and low-discrepancy patterns, which have been specifically designed only for this purpose. For image reconstruction, our method outperforms traditional blue-noise sampling when the variation in the signal is concentrated along one dimension. Finally, we use our patterns to distribute primitives uniformly in 3D space such that their 2D projections retain a blue-noise distribution.
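The abstract names dart throwing as one of the methods extended to produce projective blue-noise patterns. The sketch below shows the basic idea under simplifying assumptions: samples are rejected if they conflict either in the full domain or in any axis-aligned 1D projection. The radii, the restriction to 1D axis-aligned projections, and the parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def projective_dart_throwing(n, dims=3, r_full=0.08, r_proj=0.02,
                             max_tries=200000, seed=0):
    """Dart throwing in [0,1]^dims that also enforces a minimum distance
    in every axis-aligned 1D projection, so each projection keeps a
    blue-noise-like spacing."""
    rng = np.random.default_rng(seed)
    samples = []
    tries = 0
    while len(samples) < n and tries < max_tries:
        tries += 1
        p = rng.random(dims)
        ok = True
        for q in samples:
            if np.linalg.norm(p - q) < r_full:       # conflict in the full domain
                ok = False
                break
            if np.any(np.abs(p - q) < r_proj):       # conflict in some 1D projection
                ok = False
                break
        if ok:
            samples.append(p)
    return np.array(samples)

pts = projective_dart_throwing(100)  # 3D points whose 1D projections stay well spaced
```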
Figure 1: Our new image-based rendering algorithm warps original views (with depth, not shown) to produce novel views. It performs as fast as the fastest competitors but maintains much higher visual quality for larger displacements. The insets compare against other common approaches (mesh warp: 2.3 fps; our method: 35.4 fps; ground truth) and show the RGB-DSSIM difference. Frame rates are computed on an Intel Compute Stick at a resolution of 1280×720.

Abstract
VR headsets and hand-held devices are not powerful enough to render complex scenes in real time. A server can take on the rendering task, but network latency prohibits a good user experience. We present a new image-based rendering (IBR) architecture for masking the latency. It runs in real time even on very weak mobile devices, supports modern game-engine graphics, and maintains high visual quality even for large view displacements. We propose a novel server-side dual-view representation that leverages an optimally placed extra view and depth peeling to provide the client with coverage for filling disocclusion holes. This representation is rendered directly in a novel wide-angle projection with a favorable directional parameterization. A new client-side IBR algorithm uses a pre-transmitted level-of-detail proxy with an encaging simplification and depth-carving to maintain highly complex geometric detail. We demonstrate our approach with typical VR / mobile gaming applications running on mobile hardware. Our technique compares favorably to competing approaches according to perceptual and numerical comparisons.
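At the core of any such client-side IBR scheme is warping a source view with depth into a novel view. The sketch below is a generic pinhole depth-image reprojection, offered only to illustrate that step; it assumes shared intrinsics `K` and a rigid transform `T_src_to_novel` (both hypothetical names) and does not include the paper's dual-view representation, depth peeling, wide-angle projection, or hole filling.

```python
import numpy as np

def reproject(depth, K, T_src_to_novel):
    """Forward-reproject each pixel of a source depth map into a novel view.

    depth          : (H, W) depth along the source camera's z-axis
    K              : (3, 3) pinhole intrinsics, assumed shared by both views
    T_src_to_novel : (4, 4) rigid transform from source to novel camera frame
    Returns (H, W, 2) pixel coordinates in the novel view (no occlusion
    handling or disocclusion filling).
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)   # (H, W, 3)
    cam = (pix @ np.linalg.inv(K).T) * depth[..., None]                   # back-project
    cam_h = np.concatenate([cam, np.ones((H, W, 1))], axis=-1)            # homogeneous
    novel = cam_h @ T_src_to_novel.T                                      # move to novel frame
    proj = novel[..., :3] @ K.T                                           # re-project
    return proj[..., :2] / proj[..., 2:3]
```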
Size matters. Human perception most naturally relates relative extent, area, or volume to importance, nearness, and weight. Conversely, conveying the importance of something by depicting it at a different size is a classic artistic principle, in particular when importance varies across a domain. One striking example is the neuronal homunculus: a human figure where the size of each body part is proportional to the neural density of that part. In this work we propose an approach that changes the local size of a 2D image or 3D surface and, at the same time, minimizes distortion, preserves smoothness, and, most importantly, avoids fold-overs (collisions). We employ a parallel, two-stage optimization process that scales the shape non-uniformly according to an interactively defined importance map and then solves for a nearby, self-intersection-free configuration. The results include an interactive 3D-rendered version of the classic sensory homunculus, as well as a range of images and surfaces with different importance maps.
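To make the two-stage structure concrete, here is a deliberately simplified sketch in which discs stand in for the shape's elements: stage one scales each disc by its importance value, stage two iteratively pushes overlapping discs apart to a nearby, intersection-free configuration. The disc abstraction, the push-apart rule, and all parameter names are assumptions for illustration; the paper optimizes an actual deformation of images and surfaces.

```python
import numpy as np

def importance_scale_and_relax(centers, radii, importance, iters=50):
    """Two-stage sketch: (1) non-uniform scaling by importance,
    (2) collision resolution toward a nearby overlap-free configuration.

    centers    : (n, 2) disc centers
    radii      : (n,)   base radii
    importance : (n,)   per-element scale factors
    """
    c = centers.copy().astype(np.float64)
    r = radii * importance                      # stage 1: scale by importance
    for _ in range(iters):                      # stage 2: resolve collisions
        moved = False
        for i in range(len(c)):
            for j in range(i + 1, len(c)):
                d = c[j] - c[i]
                dist = np.linalg.norm(d) + 1e-12
                overlap = r[i] + r[j] - dist
                if overlap > 0:                 # push both discs apart equally
                    step = 0.5 * overlap * d / dist
                    c[i] -= step
                    c[j] += step
                    moved = True
        if not moved:
            break
    return c, r
```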