Figure 1: An example of accommodation-convergence conflict minimization, viewed through red-and-blue film glasses: (a) original; (b) our result.

Abstract: The purpose of this paper is to optimize the stereoscopic 3D experience when users watch stereoscopic content. We identify two factors in the stereoscopic experience: visual fatigue and depth perception. To optimize the stereoscopic experience, we propose five principles for reducing visual fatigue and enhancing depth perception, and we implement them by cropping and warping. Our methods avoid the time-consuming steps of existing view-interpolation approaches, such as camera calibration, accurate dense depth-map estimation, and inpainting. We also design a GUI that enables users to efficiently edit stereoscopic video, optimize the stereoscopic experience, and preview the stereoscopic result. A user study shows that our method is successful in optimizing the stereoscopic experience. One example is shown in Figure 1.

Stereoscopic 3D experience

Our methods to optimize the stereoscopic 3D experience follow five principles of human visual attention, which are supported by psychological evidence [Teittinen]. The five basic principles can be classified into two main factors: visual fatigue reduction and depth perception enhancement.
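The abstract states only that the principles are implemented by cropping and warping. As a rough illustration of the cropping side, the following is a minimal Python sketch of shifting and cropping a rectified stereo pair to uniformly adjust on-screen disparity; the function name, the sign convention, and the uniform-shift simplification are assumptions, not the paper's actual warping method.

```python
import numpy as np

def shift_and_crop(left, right, shift_px):
    """Crop a rectified stereo pair so the right view is shifted horizontally
    relative to the left, which uniformly changes on-screen disparity
    (hypothetical helper; not the paper's warping method).

    left, right: H x W x 3 arrays of the same shape.
    shift_px: relative horizontal shift in pixels; the sign convention
              (pushing content behind vs. in front of the screen) is assumed.
    """
    assert left.shape == right.shape
    h, w, _ = left.shape
    s = abs(int(shift_px))
    if s == 0:
        return left, right
    if shift_px > 0:
        left_c, right_c = left[:, s:, :], right[:, :w - s, :]
    else:
        left_c, right_c = left[:, :w - s, :], right[:, s:, :]
    return left_c, right_c

# Example: shift a dummy 640x480 pair by 8 pixels; both views become 480 x 632 x 3.
left = np.zeros((480, 640, 3), dtype=np.uint8)
right = np.zeros((480, 640, 3), dtype=np.uint8)
left_c, right_c = shift_and_crop(left, right, 8)
```

A uniform shift like this only moves the whole depth range; per-region warping, as the paper describes, would be needed to reshape depth locally.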
Abstract: Character speech animation is traditionally considered important but tedious work, especially when lip synchronization (lip-sync) is taken into consideration. Although several methods have been proposed to ease the burden on artists creating facial and speech animation, almost none are fast and efficient. In this paper, we introduce a framework for synthesizing lip-sync character speech animation in real time from a given speech sequence and its corresponding texts. We start by training dominated animeme models for each kind of phoneme by learning the character's animation control signal through an EM-style optimization approach. The dominated animeme models are further decomposed into polynomial-fitted animeme models and corresponding dominance functions that take coarticulation into account. Finally, given a novel speech sequence and its corresponding texts, the character's animation control signal can be synthesized in real time with the trained dominated animeme models. The synthesized lip-sync animation can even preserve exaggerated characteristics of the character's facial geometry. Moreover, because our method runs in real time, it can be used for many applications, such as lip-sync animation prototyping, multilingual animation reproduction, avatar speech, and mass animation production. Furthermore, the synthesized animation control signal can still be imported into 3D packages for further adjustment, so our method can be easily integrated into existing production pipelines.
One of the holy grails of computer graphics is the generation of photorealistic images from motion data. Regenerating convincing human animation may not be the most challenging part, but it is certainly one of the ultimate goals of computer graphics. Among full-body human animations, facial animation is the most challenging part because of its subtlety and our familiarity with human faces. In this paper, we present our work on lip-sync animation, a part of facial animation, as a framework for synthesizing lip-sync character speech animation in real time from a given speech sequence and its corresponding texts.
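The abstract describes synthesizing the control signal by blending polynomial-fitted animeme models through dominance functions. The following is a minimal, hypothetical Python sketch of such a dominance-weighted blend; the function names, the Gaussian form of the dominance functions, and the normalized-time polynomial evaluation are all assumptions rather than the paper's exact formulation.

```python
import numpy as np

def synthesize_control_signal(phoneme_intervals, animeme_polys,
                              frame_rate=30.0, spill=0.05):
    """Blend per-phoneme polynomial animeme curves with Gaussian dominance
    functions into one lip-control channel (hypothetical sketch).

    phoneme_intervals: list of (phoneme, t_start, t_end) tuples in seconds.
    animeme_polys: dict mapping phoneme -> polynomial coefficients
                   (highest degree first), evaluated on local time in [0, 1].
    spill: extra width (seconds) added to each dominance function so that
           neighboring phonemes overlap, a crude stand-in for coarticulation.
    """
    t_end = max(e for _, _, e in phoneme_intervals)
    t = np.arange(0.0, t_end, 1.0 / frame_rate)
    num = np.zeros_like(t)
    den = np.full_like(t, 1e-8)  # avoid division by zero between phonemes
    for ph, s, e in phoneme_intervals:
        center = 0.5 * (s + e)
        width = max(e - s, 1e-3)
        # Gaussian dominance: influence peaks at the phoneme center and
        # decays into neighboring phonemes.
        w = np.exp(-0.5 * ((t - center) / (0.5 * width + spill)) ** 2)
        # Evaluate the phoneme's animeme polynomial on normalized local time.
        local = np.clip((t - s) / width, 0.0, 1.0)
        num += w * np.polyval(animeme_polys[ph], local)
        den += w
    return t, num / den

# Example usage with made-up phonemes and linear animeme curves.
polys = {"M": [-0.2, 0.6], "AA": [0.8, 0.1]}        # coefficients, highest first
intervals = [("M", 0.0, 0.12), ("AA", 0.12, 0.35)]
times, signal = synthesize_control_signal(intervals, polys)
```

Because each frame is just a weighted sum of a few per-phoneme polynomial evaluations, this kind of synthesis is cheap enough to run in real time, which is consistent with the real-time claim in the abstract.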