The tessellated spheres in the left image are rendered with two different blue plastic BRDFs, yet they are perceived as being made from the same material. The objects in the right image are rendered with an identical blue plastic BRDF, yet their appearance is very different.
Image textures can easily be created using texture synthesis by example; creating procedural textures, however, is much more difficult. This is unfortunate, since procedural textures have significant advantages over image textures. In this paper we address texture synthesis by example for procedural textures. We introduce a method for procedural multiresolution noise by example: it computes the weights of a procedural multiresolution noise, a simple but common class of procedural textures, from an example texture. We illustrate the method by using it as a key component in a technique for texture synthesis by example for isotropic stochastic procedural textures. Our method significantly facilitates the creation of such procedural textures.
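A multiresolution noise of the kind the abstract describes is a weighted sum of noise octaves, where octave i is a base noise evaluated at frequency 2^i. The sketch below is a minimal illustration, not the paper's method: the base noise is a simple deterministic 1D value noise, and the per-octave `weights` are given directly, whereas the paper's contribution is estimating those weights from an example texture.

```python
import numpy as np

def value_noise_1d(x, seed=0):
    """Deterministic 1D value noise: random values at integer lattice
    points, linearly interpolated in between."""
    rng = np.random.default_rng(seed)
    lattice = rng.random(256)              # fixed table of lattice values
    i0 = np.floor(x).astype(int) % 256
    i1 = (i0 + 1) % 256
    t = x - np.floor(x)                    # fractional position in the cell
    return (1 - t) * lattice[i0] + t * lattice[i1]

def multiresolution_noise(x, weights, seed=0):
    """Weighted sum of noise octaves: band i is noise at frequency 2**i,
    scaled by weights[i]. Fitting `weights` to an example is the core
    problem addressed by the paper; here they are simply supplied."""
    return sum(w * value_noise_1d((2 ** i) * x, seed + i)
               for i, w in enumerate(weights))

x = np.linspace(0.0, 4.0, 64)
tex = multiresolution_noise(x, weights=[1.0, 0.5, 0.25])
```

With a decaying weight sequence like this, low frequencies dominate and the result resembles classic fractal noise; other weight profiles yield other stochastic looks, which is what makes the weights a useful texture parameterization.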
Abstract: Immersive spaces such as 4-sided displays with stereo viewing and high-quality tracking provide a very engaging and realistic virtual experience. However, walking is inherently limited by the restricted physical space, both due to the screens (limited translation) and the missing back screen (limited rotation). In this paper, we propose three novel locomotion techniques with three concurrent goals: keep the user safe from reaching the translational and rotational boundaries; increase the amount of real walking; and provide a more enjoyable and ecological interaction paradigm compared to traditional controller-based approaches. We notably introduce the "Virtual Companion", which uses a small bird to guide the user through virtual environments (VEs) larger than the physical space. We evaluate the three new techniques through a user study with travel-to-target and path-following tasks. The study provides insight into the relative strengths of each new technique for the three aforementioned goals. Specifically, if speed and accuracy are paramount, traditional controller interfaces augmented with our novel warning techniques may be more appropriate; if physical walking is more important, two of our paradigms (extended Magic Barrier Tape and Constrained Wand) should be preferred; finally, fun and ecological criteria favor the Virtual Companion.
Most previous work on gloss perception has examined the strength and sharpness of specular reflections in simple bidirectional reflectance distribution functions (BRDFs) having a single specular component. However, BRDFs can be substantially more complex, and it is interesting to ask how many additional perceptual dimensions there could be in the visual representation of surface reflectance qualities. To address this, we tested materials with two specular components that elicit an impression of hazy gloss. Stimuli were renderings of irregularly shaped objects under environment illumination, with either a single Ward specular BRDF component (Ward, 1992), or two such components with the same total specular reflectance but different sharpness parameters, yielding both sharp and blurry highlights simultaneously. Differently shaped objects were presented side by side in matching, discrimination, and rating tasks. Our results show that observers mainly attend to the sharpest reflections in matching tasks, but they can indeed discriminate between single-component and two-component specular materials in discrimination and rating tasks. The results reveal an additional perceptual dimension of gloss, beyond strength and sharpness, akin to "haze gloss" (Hunter & Harold, 1987). However, neither the physical measurements of Hunter and Harold nor the kurtosis of the specular term predict perception in our tasks. We suggest the visual system may use a decomposition of specular reflections in the perception of hazy gloss, and we compare two possible candidates: a physical representation made of two gloss components, and an alternative representation made of a central gloss component and a surrounding halo component.
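The stimulus construction described above, two Ward lobes sharing a total specular reflectance but differing in sharpness, can be sketched numerically. This is an illustrative sketch, not the authors' rendering code: the isotropic Ward specular term follows Ward (1992), and the split fraction `k` between the sharp and blurry lobes is a parameter we introduce here for illustration.

```python
import numpy as np

def ward_specular(cos_i, cos_o, theta_h, rho_s, alpha):
    """Isotropic Ward specular lobe (Ward, 1992).
    cos_i, cos_o: cosines of incident/outgoing polar angles;
    theta_h: half-vector polar angle; rho_s: specular reflectance;
    alpha: roughness (smaller alpha means a sharper highlight)."""
    return (rho_s * np.exp(-np.tan(theta_h) ** 2 / alpha ** 2)
            / (4.0 * np.pi * alpha ** 2 * np.sqrt(cos_i * cos_o)))

def two_component_specular(cos_i, cos_o, theta_h,
                           rho_total, k, alpha_sharp, alpha_blur):
    """Two-lobe mix as in the stimuli: the total specular reflectance
    rho_total is split between a sharp and a blurry Ward lobe, with
    fraction k going to the sharp lobe (k is our illustrative knob)."""
    sharp = ward_specular(cos_i, cos_o, theta_h, k * rho_total, alpha_sharp)
    blurry = ward_specular(cos_i, cos_o, theta_h, (1 - k) * rho_total, alpha_blur)
    return sharp + blurry
```

Holding `rho_total` fixed while varying `k` and the two alphas keeps overall specular energy constant, so any perceptual difference between single- and two-component materials cannot be explained by gloss strength alone, which is the point of the manipulation.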
Image-based rendering (IBR)