Sounds are often the result of motions of virtual objects in a virtual environment. Sounds and the motions that cause them should therefore be treated in an integrated way; when they lack the proper correspondence, the resulting confusion can lessen the effect of each. In this paper, we present an integrated system for modeling, synchronizing, and rendering sounds for virtual environments. The key idea of the system is a functional representation of sounds, called timbre trees, which is used to model parameterizable sounds. These parameters can then be mapped to the parameters associated with the motions of objects in the environment, establishing the correspondence between motions and sounds. Representing arbitrary sounds with timbre trees is a difficult problem that we do not address in this paper; we describe approaches for creating some timbre trees, including the use of genetic algorithms. Rendering the sounds in an aural environment is achieved by attaching special environmental nodes to the timbre trees that represent attenuation and delay as well as listener effects. These trees are then evaluated to generate the sounds. The system we describe runs in parallel, in real time, on an eight-processor SGI Onyx. We see the main contribution of the present system as a conceptual framework for treating sound and motion in a virtual environment in an integrated way.
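The idea of a timbre tree as a functional sound representation with motion-driven parameters can be illustrated with a minimal sketch. All class names, parameter names, and values below are hypothetical, chosen only to show the shape of the technique: leaves supply values, internal nodes combine child signals, an environmental node applies attenuation and delay, and a motion parameter (here an impact speed) is mapped onto the tree before it is evaluated per sample.

```python
import math

# Hypothetical timbre-tree node types; names are illustrative, not the
# paper's actual implementation.

class Const:
    """Leaf node holding a fixed parameter value."""
    def __init__(self, value):
        self.value = value
    def eval(self, t):
        return self.value

class Sine:
    """Oscillator whose frequency is supplied by a child node."""
    def __init__(self, freq):
        self.freq = freq
    def eval(self, t):
        return math.sin(2 * math.pi * self.freq.eval(t) * t)

class Mul:
    """Internal node combining two child signals by multiplication."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def eval(self, t):
        return self.a.eval(t) * self.b.eval(t)

class Attenuate:
    """Environmental node: scales and delays the signal for a listener."""
    def __init__(self, child, gain, delay):
        self.child, self.gain, self.delay = child, gain, delay
    def eval(self, t):
        if t < self.delay:
            return 0.0
        return self.gain * self.child.eval(t - self.delay)

# Map a motion parameter (e.g. impact speed from the simulation) onto a
# tree parameter, then render samples by evaluating the tree over time.
impact_speed = 3.0
tree = Attenuate(
    Mul(Const(impact_speed * 0.1),   # louder sound for a faster impact
        Sine(Const(440.0))),
    gain=0.8, delay=0.01)            # attenuation and propagation delay

rate = 8000
samples = [tree.eval(i / rate) for i in range(rate // 10)]  # 0.1 s of audio
```

Because the whole sound is a tree of pure functions of time, re-evaluating it with a different motion parameter immediately yields a correspondingly different sound, which is the correspondence the abstract describes.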
In this paper, we describe a novel method to automatically generate dance motion that is synchronized and perceptually matched to a given musical piece. The method extracts 30 musical features from musical data and 37 motion features from motion data. A matching process is then performed between the two feature spaces, considering both the correspondence of relative changes within each feature space and the correlations between musical and motion features. Similarity matrices are introduced to match the amount of relative change in the two feature spaces, and correlation coefficients are used to measure the strength of correlation between each pair of musical and motion features. In this way, the progressions of musical and dance-motion patterns, as well as the perceptual changes between consecutive musical and motion segments, are matched. To demonstrate the effectiveness of the approach, we designed and carried out a user opinion study to assess its perceived quality. Statistical analysis of the results showed that the proposed approach generated dances rated significantly better than those produced by a random walk through the dance motion database.
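The core of the matching idea, pairing a musical feature with the motion feature whose relative changes correlate with it most strongly, can be sketched as follows. The feature names and values are made up for illustration, and the Pearson correlation is computed directly rather than through the paper's full similarity-matrix machinery.

```python
# Sketch: correlate the relative changes of one musical feature against
# several candidate motion features. Names and values are illustrative.

def relative_changes(seq):
    """Change in a feature between consecutive segments."""
    return [b - a for a, b in zip(seq, seq[1:])]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# One musical feature (e.g. loudness per segment) and two candidate
# motion features (e.g. joint velocity and hand height per segment).
music_loudness  = [0.2, 0.5, 0.9, 0.4, 0.6]
motion_velocity = [0.1, 0.4, 0.8, 0.3, 0.5]   # changes track the music
motion_height   = [0.9, 0.2, 0.1, 0.8, 0.2]   # changes oppose the music

dm = relative_changes(music_loudness)
scores = {name: pearson(dm, relative_changes(feat))
          for name, feat in [("velocity", motion_velocity),
                             ("height", motion_height)]}
best = max(scores, key=scores.get)  # motion feature to pair with loudness
```

Running over all 30 x 37 feature pairs, such coefficients populate the correlation table the abstract describes, from which strongly correlated pairs guide the selection of motion segments for each musical segment.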