Figure 1: A self-refining liquid control game uses player analytics to guide precomputation to the most visited regions of the liquid's state space. The game's quality continuously improves over time, ultimately providing a high-quality, interactive experience.

Abstract
Data-driven simulation demands good training data drawn from a vast space of possible simulations. While fully sampling these large spaces is infeasible, we observe that in practical applications, such as gameplay, users explore only a vanishingly small subset of the dynamical state space. In this paper we present a sampling approach that takes advantage of this observation by concentrating precomputation around the states that users are most likely to encounter. We demonstrate our technique in a prototype self-refining game whose dynamics improve with play, ultimately providing realistically rendered, rich fluid dynamics in real time on a mobile device. Our results show that our analytics-driven training approach yields lower model error and fewer visual artifacts than a heuristic training strategy.
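The core idea of the abstract above — concentrating precomputation on frequently visited states — can be illustrated by sampling training states in proportion to observed visit counts. This is a minimal sketch, not the paper's actual method; the function name, the toy state labels, and the use of simple multinomial sampling are all illustrative assumptions.

```python
import random
from collections import Counter

def choose_training_states(visit_log, n_samples, rng=None):
    """Pick precomputation targets, weighted by how often players visited each state.

    visit_log: sequence of state identifiers recorded from gameplay analytics.
    Returns a list of n_samples states, with heavily visited states overrepresented.
    """
    rng = rng or random.Random(0)
    counts = Counter(visit_log)
    states = list(counts)
    weights = [counts[s] for s in states]
    return rng.choices(states, weights=weights, k=n_samples)

# Toy analytics log: players mostly pour, rarely splash.
log = ["pour"] * 80 + ["swirl"] * 15 + ["splash"] * 5
samples = choose_training_states(log, 1000)
```

Under this scheme, roughly 80% of the precomputation budget lands on the "pour" states, matching where players actually spend their time, rather than being spread uniformly over the full state space.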
Figure 1: Interactively creating movements of animated hands using our interface. (a) Interface: a typical desktop setup, including haptic interfaces and stereo display. (b) Wine Glass, (c) Champagne: grasps conform to object shapes using our adaptation method. (d) Chess, (e) Cigarette: dynamic interaction with grasped objects and the environment. (f) Robot claw: generalization — we can easily control a non-anthropomorphic robotic hand with the same interface. The example also shows stacking virtual objects with compliant contact.

Abstract
Humans show effortless dexterity while manipulating objects using their own hands. However, specifying the motion of a virtual character's hand or of a robotic manipulator remains a difficult task that requires animation expertise or extensive periods of offline motion capture. We present Hands On: a real-time, adaptive animation interface, driven by compliant contact and force information, for animating contact and precision manipulations of virtual objects. Using our interface, an animator controls an abstract grasper trajectory while the full hand pose is automatically shaped by proactive adaptation and compliant scene interactions. Haptic force feedback enables intuitive control by mapping interaction forces from the full animated hand back to the reduced animator feedback space, invoking the same human sensorimotor processes utilized in natural precision manipulations. We provide an approach for online, adaptive shaping of the animated manipulator based on prior interactions, resulting in more functional and appealing motions. The importance of haptic feedback for authoring virtual object manipulations is verified in a user study with nonexpert participants that examines contact force trajectories while using our interface. Comparing the quality of motions produced with and without force rendering, haptic feedback is shown to be critical for efficiently communicating contact forces and dynamic events to the user.