Abstract: Nonverbal communication is an important aspect of real-life face-to-face interaction and one of the most efficient ways to convey emotions; therefore, users should be provided with the means to replicate it in the virtual world. Because articulated embodiments are well suited to providing body communication in virtual environments, this paper first reviews some of the advantages and disadvantages of complex embodiments. After a brief introduction to nonverbal communication theories, we present our solution, taking into account the practical limitations of input devices as well as social science aspects. We introduce our sample of actions and their implementation in our VLNET (Virtual Life Network) networked virtual environment, and discuss the results of an informal evaluation experiment.
This paper presents a crowd modelling method for Collaborative Virtual Environments (CVEs) that aims to create a sense of group presence and thereby provide a more realistic virtual world. An adaptive display is also presented as a key element for limiting the information that must be processed, so that an acceptable frame rate is maintained during crowd visualisation. The system has been integrated into several CVE platforms, which are presented at the end of this paper.
Motion capture is an increasingly popular animation technique; however, the amount of data acquired by motion capture can become substantial. This makes it difficult to use motion capture data in a number of applications, such as motion editing, motion understanding, automatic motion summarization, motion thumbnail generation, or motion database search and retrieval. To overcome this limitation, we propose an automatic approach to extracting keyframes from a motion capture sequence. We treat the input sequence as a set of motion curves and identify the most salient parts of these curves using a newly proposed metric called 'motion saliency'. We select the curves to be analysed using a dimension reduction technique, Principal Component Analysis (PCA). We then apply frame reduction techniques to extract the most important frames as the keyframes of the motion. With this approach, around 8% of the frames are selected as keyframes for motion capture sequences.
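The pipeline in this abstract (project motion curves with PCA, score frames by saliency, keep the highest-scoring ~8% as keyframes) can be sketched as follows. This is a minimal illustration under stated assumptions: the abstract does not define the 'motion saliency' metric, so a simple curvature proxy (the second finite difference of the projected curves) stands in for it, and the function name `extract_keyframes` and all parameters are hypothetical.

```python
import numpy as np

def extract_keyframes(motion, n_components=3, target_ratio=0.08):
    """Select keyframe indices from a (frames x dofs) motion matrix.

    PCA (via SVD) picks the most informative curves; a curvature-based
    saliency proxy ranks frames. The paper's actual 'motion saliency'
    metric is not specified here, so this is only a stand-in.
    """
    X = motion - motion.mean(axis=0)
    # PCA via SVD: rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    curves = X @ Vt[:n_components].T          # projected curves, frames x k
    # Saliency proxy: magnitude of the second finite difference
    # (curvature) summed over the selected curves.
    accel = np.abs(np.diff(curves, n=2, axis=0)).sum(axis=1)
    saliency = np.zeros(len(motion))
    saliency[1:-1] = accel
    # Always keep first and last frames; fill the rest by saliency rank.
    n_keys = max(2, int(round(target_ratio * len(motion))))
    ranked = np.argsort(saliency)[::-1]
    keys = {0, len(motion) - 1}
    for idx in ranked:
        if len(keys) >= n_keys:
            break
        keys.add(int(idx))
    return sorted(keys)
```

For a 100-frame clip this keeps roughly 8 frames, matching the ~8% figure quoted in the abstract; a real implementation would substitute the paper's saliency metric for the curvature proxy.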