We describe a method for eye pupil localization based on an ensemble of randomized regression trees and use several publicly available datasets for its quantitative and qualitative evaluation. The method compares well with the reported state of the art and runs in real time on hardware with limited processing power, such as mobile devices.
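The core idea of such an ensemble is that many weak, randomized regressors each produce a rough estimate of the pupil position, and averaging their outputs yields a more accurate and stable final estimate. The sketch below illustrates only this generic ensemble-averaging step, not the paper's actual tree construction or features; `make_random_tree` and its fixed per-tree offset are hypothetical stand-ins for a trained tree's prediction.

```python
import random
from statistics import mean

def make_random_tree(seed):
    """Stand-in for one randomized regression tree: a seeded, fixed
    perturbation around the input estimate (hypothetical, for illustration)."""
    rng = random.Random(seed)
    dx, dy = rng.uniform(-2.0, 2.0), rng.uniform(-2.0, 2.0)
    def predict(patch_center):
        cx, cy = patch_center
        return (cx + dx, cy + dy)
    return predict

def ensemble_predict(trees, patch_center):
    """Average the per-tree estimates to obtain the final pupil location."""
    xs, ys = zip(*(t(patch_center) for t in trees))
    return (mean(xs), mean(ys))

trees = [make_random_tree(s) for s in range(32)]
estimate = ensemble_predict(trees, (100.0, 60.0))
```

Averaging cancels much of the individual trees' noise, which is why such ensembles can stay accurate while each tree remains cheap enough for real-time use.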
Abstract: Nonverbal communication is an important aspect of real-life face-to-face interaction and one of the most efficient ways to convey emotions; users should therefore be provided with the means to replicate it in the virtual world. Because articulated embodiments are well suited to providing body communication in virtual environments, this paper first reviews some of the advantages and disadvantages of complex embodiments. After a brief introduction to nonverbal communication theories, we present our solution, which takes into account both the practical limitations of input devices and social science aspects. We introduce our sample of actions and their implementation in our VLNET (Virtual Life Network) networked virtual environment, and discuss the results of an informal evaluation experiment.
Animated virtual human characters are a common feature of interactive graphical applications such as computer and video games, online virtual worlds, and simulations. Due to the dynamic nature of such applications, character animation must be responsive and controllable in addition to looking as realistic and natural as possible. Although procedural and physics-based animation provide a great amount of control over motion, they still look too unnatural to be of use in all but a few specific scenarios, which is why interactive applications today still rely mainly on recorded and hand-crafted motion clips. The challenge faced by animation system designers is to dynamically synthesize new, controllable motion, either by concatenating short motion segments into sequences of different actions or by parametrically blending clips that correspond to different variants of the same logical action. In this article, we provide an overview of research in the field of example-based motion synthesis for interactive applications. We present methods for the automated creation of supporting data structures for motion synthesis and describe how they can be employed at run time to generate motion that accurately accomplishes tasks specified by an AI or a human user.
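Parametric blending of two variants of the same logical action typically comes down to interpolating corresponding frames joint by joint, with rotations blended by spherical linear interpolation (slerp) rather than naive averaging. The sketch below shows this standard per-joint slerp blend; the poses-as-quaternion-lists representation and the `walk`/`run` frames are illustrative assumptions, not any specific system's data format.

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions
    (w, x, y, z) -- the standard way to blend joint rotations."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                       # take the shorter arc
        q1 = tuple(-c for c in q1)
        dot = -dot
    dot = min(dot, 1.0)
    theta = math.acos(dot)
    if theta < 1e-6:                    # nearly identical rotations
        return q0
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

def blend_poses(pose_a, pose_b, t):
    """Blend two poses (lists of per-joint quaternions) with weight t in [0, 1]."""
    return [slerp(qa, qb, t) for qa, qb in zip(pose_a, pose_b)]

# Blending corresponding frames of a hypothetical 'walk' and 'run' clip halfway:
walk = [(1.0, 0.0, 0.0, 0.0)]                        # identity rotation
run = [(math.cos(0.25), math.sin(0.25), 0.0, 0.0)]   # small rotation about x
halfway = blend_poses(walk, run, 0.5)
```

Sweeping `t` from 0 to 1 over every frame pair produces a continuum of motions between the two example clips, which is the essence of the parametric-blending approach mentioned above.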