Figure 1: Our data-driven approach to synthesizing cloth sounds produces soundtracks for a wide range of common cloth animation scenarios. In this example, the familiar sounds of a windbreaker are synthesized as the character shadow-boxes.
Abstract
We present a practical data-driven method for automatically synthesizing plausible soundtracks for physics-based cloth animations running at graphics rates. Given a cloth animation, we analyze the deformations and use motion events to drive crumpling and friction sound models estimated from cloth measurements. We synthesize a low-quality sound signal, which is then used as a target signal for a concatenative sound synthesis (CSS) process. CSS selects a sequence of microsound units (very short segments) from a database of recorded cloth sounds that best match the synthesized target sound in a low-dimensional feature space after applying a hand-tuned warping function. The selected microsound units are concatenated together to produce the final cloth sound with minimal filtering. Our approach avoids expensive physics-based synthesis of cloth sound, instead relying on cloth recordings and our motion-driven CSS approach for realism. We demonstrate its effectiveness on a variety of cloth animations involving various materials and character motions, including first-person virtual clothing with binaural sound.
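To make the CSS step concrete, the following is a minimal sketch of the unit-selection-and-concatenation idea described above, not the paper's actual implementation: it assumes hypothetical inputs (per-frame feature vectors for the synthesized target signal and for the database microsound units), uses a simple Euclidean nearest-neighbor match in the feature space, and crossfades adjacent units as a stand-in for the "minimal filtering" mentioned in the abstract. Function names, the greedy matching strategy, and the overlap length are all illustrative assumptions.

```python
import numpy as np

def select_units(target_feats, db_feats, db_units):
    """Greedy unit selection (illustrative): for each target frame, pick the
    database microsound unit whose feature vector is closest in Euclidean
    distance. The paper's actual cost function and warping are not reproduced."""
    selected = []
    for f in target_feats:
        idx = np.argmin(np.linalg.norm(db_feats - f, axis=1))
        selected.append(db_units[idx])
    return selected

def concatenate_units(units, overlap=64):
    """Overlap-add the selected units with a short Hann crossfade to smooth
    the joins between consecutive microsound units (assumed unit length)."""
    hop = units[0].shape[0] - overlap
    out = np.zeros(hop * len(units) + overlap)
    window = np.hanning(2 * overlap)
    fade_in, fade_out = window[:overlap], window[overlap:]
    for i, u in enumerate(units):
        u = u.copy()
        u[:overlap] *= fade_in      # fade in the head of each unit
        u[-overlap:] *= fade_out    # fade out the tail of each unit
        out[i * hop : i * hop + u.shape[0]] += u
    return out
```

In this sketch the target signal serves only to supply feature vectors for matching; the output audio is assembled entirely from recorded microsound units, which reflects the abstract's claim that realism comes from the cloth recordings rather than from the low-quality synthesized target.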