Motion blur in a photograph is the consequence of object motion during image acquisition. It produces a visible trail along the trajectory of the recorded object and can be used by photographers to convey a sense of motion. However, capturing this effect as intended is very challenging and requires considerable experience from the photographer. To gain real control over motion blur, it can instead be added in a post-process, but current solutions require complex manual intervention and can produce artifacts that incorrectly mix moving and static objects. In this paper, we propose a novel method for adding motion blur to a single image that creates the illusion of a photographed motion. Relying on minimal user input, our filtering process produces a virtual motion effect while carefully handling object boundaries to avoid the artifacts produced by standard filtering methods. We illustrate the effectiveness of our solution on various complex examples, including multi-directional blur, reflections, and multiple objects, and show how several motion-related artistic effects can be achieved. Our post-processing solution is an alternative to capturing the intended real-world motion blur directly and enables fine-grained control of the motion-blur effect.
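The abstract does not detail the filtering algorithm, but the general idea of blurring a user-selected object while keeping static content sharp can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes a user-provided object mask (object_mask.png), an input photo (photo.jpg), and a simple linear motion kernel, and it handles boundaries only by compositing with the blurred mask.

```python
# Minimal sketch (not the paper's algorithm): blur a masked object along a
# linear motion direction and composite it back so the background stays sharp.
# File names, kernel length, and blur angle are illustrative assumptions.
import numpy as np
import cv2


def motion_kernel(length: int, angle_deg: float) -> np.ndarray:
    """Normalized line kernel approximating a linear motion path."""
    k = np.zeros((length, length), dtype=np.float32)
    k[length // 2, :] = 1.0                                   # horizontal line
    center = ((length - 1) / 2.0, (length - 1) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    k = cv2.warpAffine(k, rot, (length, length))              # rotate to the motion direction
    return k / k.sum()


img = cv2.imread("photo.jpg").astype(np.float32) / 255.0                        # assumed input image
mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0  # assumed user mask

k = motion_kernel(length=31, angle_deg=20.0)

# Blur the object layer and its mask with the same kernel, then composite.
# Blurring the mask softens the moving/static boundary instead of leaving the
# hard seam a naive full-image filter would produce.
mask3 = mask[..., None]
blurred_obj = cv2.filter2D(img * mask3, -1, k)
blurred_mask = cv2.filter2D(mask, -1, k)[..., None]
out = blurred_obj + img * (1.0 - blurred_mask)

cv2.imwrite("motion_blurred.jpg", np.clip(out * 255.0, 0, 255).astype(np.uint8))
```

The over-composite above is only a rough stand-in for the paper's boundary handling, which the abstract describes as specifically designed to avoid mixing moving and static pixels.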
Texture is a key characteristic in defining the physical appearance of an object and a crucial element in the creative process of 3D artists. However, retrieving a texture that matches an intended look from an image collection is difficult. Contrary to most photo collections, for which object recognition has proven quite useful, syntactic descriptions of texture characteristics are not straightforward, and even creating appropriate metadata is a very difficult task. In this paper, we propose a system that helps explore large unlabeled collections of texture images. The key insight is that spatially grouping textures that share similar features can simplify navigation. Our system uses a pre-trained convolutional neural network to extract high-level semantic image features, which are then mapped to a 2-dimensional location using an adaptation of t-SNE, a dimensionality-reduction technique. We describe an interface to visualize and explore the resulting distribution and provide a series of enhanced navigation tools, namely prioritized t-SNE, scalable clustering, and multi-resolution embedding, to further facilitate exploration and retrieval tasks. Finally, we present the results of a user evaluation that demonstrates the effectiveness of our solution.
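As a rough illustration of the described pipeline (pre-trained CNN features followed by a 2D embedding), the sketch below uses an off-the-shelf ResNet-50 backbone and standard t-SNE from scikit-learn; it does not implement the paper's prioritized t-SNE, scalable clustering, or multi-resolution embedding. The textures/*.jpg path and the choice of backbone are assumptions.

```python
# Minimal sketch: embed unlabeled texture images in 2D by extracting features
# with a pretrained CNN and running t-SNE on them. Not the paper's adapted t-SNE.
import glob

import numpy as np
import torch
from PIL import Image
from sklearn.manifold import TSNE
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Pretrained backbone with the classification head removed -> 2048-d features.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

paths = sorted(glob.glob("textures/*.jpg"))   # assumed unlabeled texture collection
features = []
with torch.no_grad():
    for p in paths:
        x = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
        features.append(backbone(x).squeeze(0).numpy())
features = np.stack(features)

# Map the high-dimensional features to 2D positions for a browsable layout.
# (t-SNE requires perplexity < number of images.)
xy = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(features)
for p, (x, y) in zip(paths, xy):
    print(f"{p}\t{x:.2f}\t{y:.2f}")
```

The printed 2D coordinates would then drive the thumbnail layout in a browsing interface such as the one the paper describes.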