This paper presents a bimodal acoustic-visual synthesis technique that concurrently generates the acoustic speech signal and a 3D animation of the speaker's outer face. This is done by concatenating bimodal diphone units that consist of both acoustic and visual information. In the visual domain, we mainly focus on the dynamics of the face rather than on rendering. The proposed technique overcomes the problems of asynchrony and incoherence inherent in classic approaches to audiovisual synthesis. The different synthesis steps are similar to typical concatenative speech synthesis but are generalized to the acoustic-visual domain. The bimodal synthesis was evaluated using perceptual and subjective evaluations. The overall outcome of the evaluation indicates that the proposed bimodal acoustic-visual synthesis technique provides intelligible speech in both acoustic and visual channels.
This paper presents preliminary work on building a system able to concurrently synthesize the speech signal and a 3D animation of the speaker's face. This is done by concatenating bimodal diphone units, that is, units that comprise both acoustic and visual information. The latter is acquired using a stereovision technique. The proposed method addresses the problems of asynchrony and incoherence inherent in classic approaches to audiovisual synthesis. Unit selection is based on the classic target and join costs from acoustic-only synthesis, augmented with a visual join cost. Preliminary results indicate the benefits of the approach, since both the synthesized speech signal and the face animation are of good quality. Planned improvements and enhancements to the system are outlined.
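The unit-selection scheme described above — classic target and join costs augmented with a visual join cost — can be sketched as a standard Viterbi search over candidate diphone units. The sketch below is illustrative only: the `unit` fields (`target_cost`, per-modality boundary feature vectors) and the weighting `w_visual` are hypothetical placeholders, not the paper's actual features or cost definitions.

```python
def join_cost(prev, unit, w_visual):
    """Concatenation cost between two candidate units: Euclidean distance
    between boundary feature vectors, computed per modality and combined.
    (Feature choice and weighting are assumptions, not the paper's.)"""
    ac = sum((a - b) ** 2 for a, b in zip(prev["ac_end"], unit["ac_start"])) ** 0.5
    vi = sum((a - b) ** 2 for a, b in zip(prev["vi_end"], unit["vi_start"])) ** 0.5
    return ac + w_visual * vi


def unit(uid, target_cost, ac, vi):
    """Hypothetical candidate diphone unit with acoustic (ac) and visual (vi)
    boundary features; here the same vector is used at both boundaries."""
    return {"id": uid, "target_cost": target_cost,
            "ac_start": ac, "ac_end": ac, "vi_start": vi, "vi_end": vi}


def select_units(candidates, w_visual=1.0):
    """Dynamic-programming (Viterbi) unit selection.

    candidates[i] is the list of candidate units for target diphone i.
    Returns the minimum-cost sequence of unit ids and its total cost.
    """
    # Initialize lattice with the first column's target costs.
    best = [(u["target_cost"], [u["id"]], u) for u in candidates[0]]
    for column in candidates[1:]:
        nxt = []
        for u in column:
            # Best predecessor = lowest accumulated cost + join cost into u.
            c, path, prev = min(
                best, key=lambda e: e[0] + join_cost(e[2], u, w_visual))
            total = c + join_cost(prev, u, w_visual) + u["target_cost"]
            nxt.append((total, path + [u["id"]], u))
        best = nxt
    cost, path, _ = min(best, key=lambda e: e[0])
    return path, cost
```

With `w_visual = 0` this degenerates to acoustic-only selection; raising it penalizes visually discontinuous concatenations, which is the role the visual join cost plays in the described system.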
In this paper, we propose a purely geometric approach to establishing correspondences between 3D line segments in a given model and 2D line segments detected in an image. In contrast to existing methods, which rely on strong assumptions about the camera pose, we perform an exhaustive search to compute the maximum number of geometrically permitted correspondences between the 3D model and the 2D lines. We present a novel theoretical framework in which we sample the space of camera axis directions (which is bounded and can therefore be densely sampled, unlike the unbounded space of camera positions) and show that the resulting geometric constraints reduce the rest of the computation to the simple operation of finding the camera position as the intersection of three planes. These geometric constraints can be represented with indexed arrays, which accelerates the search further. The algorithm returns all sets of correspondences, together with their associated camera poses, that have high geometric consensus. Experimental results show that our method has better asymptotic behavior than the conventional approach. We also show that, with the inclusion of additional sensor information, our method can initialize the pose in just a few seconds in many practical situations.
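The core per-hypothesis operation the abstract describes — recovering the camera position as the intersection of three planes — is a 3×3 linear solve. The sketch below shows only that generic geometric step (here via Cramer's rule); the plane representation `(normal, offset)` is an assumption for illustration, not the paper's constraint formulation.

```python
def intersect_three_planes(planes):
    """Point of intersection of three planes, each given as (n, d) with
    normal n = (a, b, c) and offset d, i.e. a*x + b*y + c*z = d.
    Solves the 3x3 system by Cramer's rule; returns None when the
    normals are (near-)degenerate and no unique intersection exists."""
    (a1, b1, c1), d1 = planes[0]
    (a2, b2, c2), d2 = planes[1]
    (a3, b3, c3), d3 = planes[2]

    def det3(m):
        # Determinant of a 3x3 matrix by cofactor expansion along row 0.
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det3([[a1, b1, c1], [a2, b2, c2], [a3, b3, c3]])
    if abs(D) < 1e-12:
        return None  # planes parallel or meeting in a line, not a point
    x = det3([[d1, b1, c1], [d2, b2, c2], [d3, b3, c3]]) / D
    y = det3([[a1, d1, c1], [a2, d2, c2], [a3, d3, c3]]) / D
    z = det3([[a1, b1, d1], [a2, b2, d2], [a3, b3, d3]]) / D
    return (x, y, z)
```

Because each sampled camera-axis hypothesis reduces to one such constant-time solve, the per-hypothesis cost stays trivially small, which is consistent with the claimed asymptotic advantage of exhaustively sampling the bounded direction space.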