Virtual humans have been the focus of computer graphics research for several years now. The amalgamation of computer graphics and artificial intelligence has led to the possibility of creating believable virtual personalities. The focus has shifted from modeling and animation towards imparting personalities to virtual humans. The aim is to create virtual humans that can interact spontaneously using natural language, emotions, and gestures. This paper discusses a system that allows the design of personality for an emotional virtual human. We adopt the Five Factor Model (FFM) of personality from psychology. To realize the model, we use a Bayesian Belief Network. We introduce a layered approach for modeling personality, moods, and emotions. To demonstrate a virtual human with an emotional personality, we integrate the system into a chat application. Thus, the system enables the developer to design and implement personalities and enables the user to interact with them.
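The abstract above outlines a layered FFM-to-mood-to-emotion design realized with a Bayesian Belief Network, but does not spell out the network itself. A deliberately minimal sketch of such a layering is given below; the trait weights, the mood_bias helper, and the small emotion set are illustrative assumptions, not the authors' model.

```python
# Minimal sketch (not the paper's actual network): a layered mapping from
# Five Factor Model (FFM) traits to a mood bias and then to emotion
# probabilities. Weight values and the emotion set are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FFMPersonality:
    openness: float          # each trait in [0, 1]
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

def mood_bias(p: FFMPersonality) -> float:
    """Scalar mood bias; positive traits raise it, neuroticism lowers it."""
    return (0.3 * p.extraversion + 0.2 * p.agreeableness
            + 0.1 * p.openness - 0.4 * p.neuroticism)

def emotion_distribution(p: FFMPersonality, event_valence: float) -> dict:
    """Combine an event's valence with the personality-driven mood bias and
    return a normalised distribution over a small illustrative emotion set."""
    joy = max(0.0, event_valence + mood_bias(p))
    sadness = max(0.0, -event_valence - mood_bias(p))
    anger = max(0.0, -event_valence * (1.0 - p.agreeableness))
    neutral = 0.2
    total = joy + sadness + anger + neutral
    return {"joy": joy / total, "sadness": sadness / total,
            "anger": anger / total, "neutral": neutral / total}

# Example: an extraverted, low-neuroticism character reacting to a positive event.
alice = FFMPersonality(0.7, 0.6, 0.9, 0.8, 0.2)
print(emotion_distribution(alice, event_valence=0.5))
```

A full Bayesian Belief Network would replace the hand-set weights with conditional probability tables authored per trait, but the layering (personality conditioning mood, mood conditioning emotion) is the same idea.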
This paper describes a generic model for personality, mood and emotion simulation for conversational virtual humans. We present a generic model for updating the parameters related to emotional behaviour, as well as a linear implementation of the generic update mechanisms. We explore how existing theories for appraisal can be integrated into the framework. Then we describe a prototype system that uses the described models in combination with a dialogue system and a talking head with synchronized speech and facial expressions.
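Since the abstract mentions a linear implementation of the generic update mechanisms without detailing it, the sketch below shows one plausible form: emotion intensities decay quickly and are driven by appraisal input, while mood decays slowly and tracks the net valence of the emotions. The decay constants, the appraisal vector format, and the personality gain are assumptions made for illustration.

```python
# Hedged sketch of a linear update rule for mood and emotion intensities,
# in the spirit of the "linear implementation of the generic update mechanisms"
# mentioned above. Constants and the appraisal format are assumptions.
import numpy as np

EMOTIONS = ["joy", "sadness", "anger", "fear"]  # illustrative emotion set

class EmotionState:
    def __init__(self, personality_gain: float = 1.0,
                 emotion_decay: float = 0.8, mood_decay: float = 0.95):
        self.emotions = np.zeros(len(EMOTIONS))  # short-lived intensities
        self.mood = 0.0                          # slowly varying valence
        self.gain = personality_gain             # scales appraisal impact
        self.emotion_decay = emotion_decay
        self.mood_decay = mood_decay

    def update(self, appraisal: np.ndarray) -> None:
        """One linear step: decay the previous state and add the
        (personality-scaled) appraisal vector from the dialogue system."""
        self.emotions = self.emotion_decay * self.emotions + self.gain * appraisal
        # Mood drifts slowly toward the net valence of the current emotions.
        valence = self.emotions[0] - self.emotions[1:].sum()
        self.mood = self.mood_decay * self.mood + (1 - self.mood_decay) * valence

# Example: two dialogue turns, the first positive, the second mildly negative.
state = EmotionState(personality_gain=1.2)
state.update(np.array([0.6, 0.0, 0.0, 0.0]))
state.update(np.array([0.0, 0.2, 0.1, 0.0]))
print(state.emotions, state.mood)
```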
Visemes are the visual counterparts of phonemes. Traditionally, the speech animation of 3D synthetic faces involves extraction of visemes from input speech, followed by the application of co-articulation rules to generate realistic animation. In this paper, we take a novel approach to speech animation, using visyllables, the visual counterparts of syllables. The approach results in a concatenative, visyllable-based speech animation system. The key contribution of this paper lies in two main areas. Firstly, we define a set of visyllable units for spoken English along with the associated phonological rules for valid syllables. Based on these rules, we have implemented a syllabification algorithm that allows segmentation of a given phoneme stream into syllables and subsequently visyllables. Secondly, we have recorded a database of visyllables using a facial motion capture system. The recorded visyllable units are post-processed semi-automatically to ensure continuity at the vowel boundaries of the visyllables. We define each visyllable in terms of Facial Movement Parameters (FMPs), which are obtained from a statistical analysis of the facial motion capture data. The FMPs allow a compact representation of the visyllables and also facilitate the formulation of rules for boundary matching and smoothing after concatenating the visyllable units. Ours is the first visyllable-based speech animation system. The proposed technique is easy to implement, effective for real-time as well as non-real-time applications, and results in realistic speech animation. Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism
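As a rough illustration of the pipeline the abstract describes (syllabification of a phoneme stream, lookup of recorded visyllables, and boundary smoothing on FMP trajectories), the sketch below uses a maximal-onset syllabifier and a linear cross-fade at visyllable boundaries. The phoneme inventory, the onset set, the database layout, and the blend length are placeholders, not the paper's actual phonological rules or data.

```python
# Simplified sketch of the visyllable pipeline: segment phonemes into syllables,
# look each syllable up as a clip of Facial Movement Parameter (FMP) frames,
# and cross-fade at the boundaries. All constants below are placeholders.
import numpy as np

VOWELS = {"AA", "AE", "AH", "EH", "IY", "OW", "UW"}      # illustrative subset
VALID_ONSETS = {(), ("HH",), ("L",), ("S",), ("S", "T")}  # illustrative subset

def syllabify(phonemes: list) -> list:
    """Segment a phoneme list into syllables using the maximal-onset principle:
    each vowel is a nucleus, and the longest valid onset from the preceding
    consonant cluster attaches to it; the rest stays as the previous coda."""
    nuclei = [i for i, p in enumerate(phonemes) if p in VOWELS]
    if not nuclei:                      # no vowel: nothing to animate
        return []
    boundaries = [0]
    for prev, nxt in zip(nuclei, nuclei[1:]):
        cluster_start, cluster_end = prev + 1, nxt
        split = cluster_end
        for s in range(cluster_start, cluster_end + 1):
            if tuple(phonemes[s:cluster_end]) in VALID_ONSETS:
                split = s               # longest valid onset found
                break
        boundaries.append(split)
    boundaries.append(len(phonemes))
    return [phonemes[a:b] for a, b in zip(boundaries, boundaries[1:])]

def concatenate(visyllable_db: dict, syllables: list, blend: int = 3) -> np.ndarray:
    """Concatenate FMP frame sequences, linearly cross-fading `blend` frames at
    every boundary to keep the trajectories continuous (assumes every clip is
    longer than `blend` frames)."""
    out = visyllable_db[tuple(syllables[0])].copy()
    for syl in syllables[1:]:
        clip = visyllable_db[tuple(syl)].copy()
        w = np.linspace(0, 1, blend)[:, None]
        clip[:blend] = (1 - w) * out[-blend:] + w * clip[:blend]
        out = np.vstack([out[:-blend], clip])
    return out
```

With a motion-capture database keyed by phoneme tuples, calling syllabify followed by concatenate yields a single continuous FMP frame sequence ready for playback on the face model.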