This paper studies users' learning perception in a virtual titration experiment with differentiated instruction. We develop a virtual reality chemistry lab and use a Leap Motion controller to detect users' hand gestures for operations. Users wear a head-mounted display and use their bare hands to interact with virtual objects to perform a titration experiment. Our system implements a complete titration process and provides assistive tools for learning and for operating virtual items. We report the essential ideas for building the system. We apply differentiated instruction to study users' learning effectiveness under different learning intensities. Two groups of students, with and without a chemistry background, participated in a user study. Our results indicate that the virtual reality chemistry lab can enhance users' learning confidence under suitable learning intensities.
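As a loose illustration of bare-hand interaction of this kind, the Python sketch below checks a pinch gesture against a nearby virtual burette. The thresholds, field names, and grab rule are assumptions for demonstration, not details reported in the paper.

```python
import math

# Hypothetical sketch: a pinch-to-grab check of the kind a Leap-Motion-driven
# titration lab might use to let users pick up virtual glassware bare-handed.
# Thresholds and tracking fields are illustrative assumptions.

PINCH_DISTANCE_M = 0.03   # thumb-index distance below which the hand is treated as pinching

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def update_grab(thumb_tip, index_tip, hand_pos, burette_pos, grab_radius=0.10):
    """Return True if the hand is pinching close enough to the burette to grab it."""
    pinching = distance(thumb_tip, index_tip) < PINCH_DISTANCE_M
    near_object = distance(hand_pos, burette_pos) < grab_radius
    return pinching and near_object

# Usage with fake tracking data (positions in metres).
print(update_grab((0.00, 0.10, 0.30), (0.02, 0.10, 0.30),
                  (0.01, 0.10, 0.30), (0.03, 0.12, 0.32)))
```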
This paper proposes a framework that integrates reinforcement learning and blend trees to generate animations of multiple agents for object transportation. The main idea is that, in the learning stage, policies are learned to control the agents to perform specific skills, including navigation, pushing, and orientation adjustment. The policies determine the blending parameters of the blend trees to achieve locomotion control of the agents. In the simulation stage, the policies are combined to control the agents to navigate, push objects, and adjust the orientation of the objects. We demonstrate several examples showing that the framework is capable of generating animations of multiple agents in different scenarios.
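As a rough illustration of how a learned policy might drive a blend tree, the Python sketch below blends two locomotion clips with a single weight produced by a toy policy. The clip data, observation features, and sigmoid policy are illustrative assumptions, not the framework described above.

```python
import numpy as np

# Hypothetical sketch: a policy outputs a blend-tree parameter that mixes
# two locomotion clips (e.g., walk forward vs. turn). Shapes and names are
# illustrative, not taken from the paper.

class BlendTree1D:
    """Blends two clip poses by a single parameter in [0, 1]."""
    def __init__(self, clip_a, clip_b):
        self.clip_a = clip_a          # pose vectors sampled from clip A
        self.clip_b = clip_b          # pose vectors sampled from clip B

    def evaluate(self, weight, frame):
        a = self.clip_a[frame % len(self.clip_a)]
        b = self.clip_b[frame % len(self.clip_b)]
        return (1.0 - weight) * a + weight * b   # linear pose blend

def policy(observation, theta):
    """Toy stand-in for a learned policy: maps the agent's observation
    (e.g., goal-direction features) to a blend weight via a sigmoid."""
    return 1.0 / (1.0 + np.exp(-observation @ theta))

# Usage: drive the blend tree with the policy output at a given frame.
rng = np.random.default_rng(0)
walk = rng.normal(size=(30, 10))      # fake 30-frame, 10-DoF walk clip
turn = rng.normal(size=(30, 10))      # fake turn clip
tree = BlendTree1D(walk, turn)
theta = rng.normal(size=4)
obs = rng.normal(size=4)              # fake observation features
pose = tree.evaluate(policy(obs, theta), frame=12)
```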
In interactive virtual environments and dynamic simulations, collisions between complex objects and articulated bodies may occur simultaneously at multiple points or regions of interference. Many solutions to the collision response problem are formulated based on local pairwise contact dynamics. In this article, we present a new solution to the global interactions and dynamic response between multiple structures in a three-dimensional environment. It is based on a new dynamic impulse graph that tracks the reaction forces through the entire system and gives a global view of all the interactions in a multibody system.
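To make the idea of an impulse graph concrete, the following Python sketch stores bodies as nodes and contacts as edges, and propagates an impulse through a contact group. The breadth-first propagation and the crude attenuation rule are simplifying assumptions for illustration, not the article's dynamics formulation.

```python
from collections import defaultdict, deque

# Hypothetical sketch of an "impulse graph": bodies are nodes, contacts are
# edges, and an impulse applied to one body is propagated so that every body
# in the same contact group receives a response in the same time step.

class ImpulseGraph:
    def __init__(self):
        self.adjacency = defaultdict(list)   # body id -> list of (other body, contact normal)

    def add_contact(self, body_a, body_b, normal):
        self.adjacency[body_a].append((body_b, normal))
        self.adjacency[body_b].append((body_a, tuple(-n for n in normal)))

    def propagate(self, source_body, impulse, attenuation=0.5):
        """Return the impulse reaching each body in the contact group of source_body."""
        result = {source_body: impulse}
        visited = {source_body}
        queue = deque([(source_body, impulse)])
        while queue:
            body, j = queue.popleft()
            for other, _normal in self.adjacency[body]:
                if other not in visited:
                    visited.add(other)
                    propagated = j * attenuation   # crude attenuation assumption
                    result[other] = propagated
                    queue.append((other, propagated))
        return result

# Usage: three stacked boxes; an impulse on box 0 reaches boxes 1 and 2.
g = ImpulseGraph()
g.add_contact(0, 1, (0.0, 1.0, 0.0))
g.add_contact(1, 2, (0.0, 1.0, 0.0))
print(g.propagate(0, impulse=10.0))
```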
In this article, we develop a system with a natural language interface to generate animations of small groups in environments with ambient crowds. Our system takes simple English sentences as input; these sentences are parsed to obtain information about character attributes, behaviors, and locations for constructing situation nodes. The situation nodes form an animation graph for producing an animation of small groups. The interface assists in interactively managing and editing the animation graph. We demonstrate the effectiveness of the system in several examples. The user study results indicate that the proposed system is user-friendly and flexible in producing animations of small groups in various scenes.

KEYWORDS: crowd animation, interactive interface, natural language processing, small groups

1 INTRODUCTION

Crowds are composed of individual characters and small groups (e.g., families, friends) that exhibit rich behaviors (e.g., watching a performance, wandering, and dancing) common in various situations, such as parks, subway stations, streets, markets, and exhibition halls. Graphical authoring tools and behavior trees have been adopted to produce animations with character models [1,2]. Natural language can be treated as a kind of input medium for constructing virtual objects and animations [3-5]. A story maker tool incorporates natural language processing (NLP) and graphics to generate 2D animated scenes [3]. The components of a scene, including actors, actions, objects, and locations, are determined based on the input sentences. A storytelling system [5] adopts a module, called subject-predicate-object (SPO) [6], to extract the subject, predicate, and object in an input sentence. The aforementioned systems mainly focus on one or a few characters and do not handle groups, crowd behaviors, or groups within ambient crowds. In this article, we implement a system with a natural language interface and crowd simulation techniques to animate groups (Figure 1). We extract group-related information, such as characters, group behaviors, group targets, and locations, from the input sentences. Situation nodes are constructed to form an entire animation. In this study, the input consists of simple English sentences. A user study was conducted to evaluate the performance and utility of our system.
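As a toy illustration of turning a simple sentence into a situation node, the Python sketch below uses a pattern-based extractor as a stand-in for SPO-style parsing. The SituationNode fields and the supported sentence pattern are assumptions for demonstration, not the system's actual pipeline.

```python
import re
from dataclasses import dataclass, field

# Hypothetical sketch: parse a simple English sentence into a "situation node"
# holding characters, a group behavior, and a location, then chain nodes into
# a minimal animation graph. The regex is a toy stand-in for SPO extraction.

@dataclass
class SituationNode:
    characters: str
    behavior: str
    location: str
    next_nodes: list = field(default_factory=list)

PATTERN = re.compile(
    r"(?P<chars>.+?) (?P<behavior>watch|wander|dance)\w* "
    r"(?:a performance )?(?:at|in|near) (?P<loc>.+)\.",
    re.IGNORECASE,
)

def parse_sentence(sentence):
    m = PATTERN.match(sentence.strip())
    if not m:
        raise ValueError(f"unsupported sentence: {sentence}")
    return SituationNode(m.group("chars"), m.group("behavior").lower(), m.group("loc"))

# Usage: chain two situation nodes into a minimal animation graph.
a = parse_sentence("Three friends wander in the park.")
b = parse_sentence("Three friends watch a performance at the plaza.")
a.next_nodes.append(b)
print(a)
```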