This paper presents the design and control of a spatialized additive synthesizer aimed at simulating environmental sounds. First, the synthesis engine, based on a combination of an additive signal model and spatialization processes, is presented. Then, the control of the synthesizer, based on a hierarchical organization of sounds, is discussed. Complex environmental sounds (such as a water flow or a fire) can then be designed through an adequate combination of a limited number of basic sounds consisting of elementary signals (impacts, chirps, noises). The mapping between the parameters describing these basic sounds and high-level descriptors of an environmental auditory scene is finally presented in the case of a rainy ambiance.
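The combination of elementary signals described above can be illustrated with a minimal sketch. The atom definitions below (damped sinusoid for an impact, linear sweep for a chirp, decaying noise burst) and the toy "rain" parameters are illustrative assumptions, not the paper's actual synthesis engine:

```python
import numpy as np

SR = 44100  # sample rate in Hz (assumed)

def impact(freq, decay, dur=0.5):
    """Exponentially damped sinusoid: a simple impact atom."""
    t = np.arange(int(SR * dur)) / SR
    return np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)

def chirp(f0, f1, dur=0.5):
    """Linear frequency sweep from f0 to f1, e.g. a water-drop atom."""
    t = np.arange(int(SR * dur)) / SR
    phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / dur * t ** 2)
    return np.sin(phase)

def noise_burst(dur=0.5, decay=8.0):
    """Decaying white-noise burst: a simple noise atom."""
    t = np.arange(int(SR * dur)) / SR
    return np.exp(-decay * t) * np.random.randn(len(t))

def scene(events, total_dur=2.0):
    """Mix (onset_time, signal) pairs into one output buffer."""
    out = np.zeros(int(SR * total_dur))
    for onset, sig in events:
        i = int(SR * onset)
        n = min(len(sig), len(out) - i)
        out[i:i + n] += sig[:n]
    return out

# A toy "rain" texture: random drops (short chirps) over a noise bed.
rng = np.random.default_rng(0)
drops = [(float(rng.uniform(0, 1.8)), 0.2 * chirp(2000, 500, 0.05))
         for _ in range(40)]
rain = scene(drops + [(0.0, 0.05 * noise_burst(2.0, 0.0))], total_dur=2.0)
```

A higher-level controller would then map a descriptor such as "rain intensity" to the drop density and the noise-bed level.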
Abstract. In this paper, an overview of stochastic modeling for the analysis/synthesis of noisy sounds is presented. In particular, we focus on time-frequency-domain synthesis based on the inverse fast Fourier transform (IFFT) algorithm, from which we propose the design of a spatialized synthesizer. The originality of this synthesizer lies in its one-stage architecture, which efficiently combines synthesis with 3D audio techniques at the same level of sound generation. This architecture also allows control of source-width rendering to reproduce naturally diffuse environments. The proposed approach leads to perceptually realistic 3D immersive auditory scenes. Applications of this synthesizer are presented in the case of noisy environmental sounds such as air swishing, sea waves, or wind. We finally discuss the limitations, but also the possibilities, offered by the synthesizer for achieving sound transformations based on the analysis of recorded sounds.
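The core of IFFT-based stochastic synthesis can be sketched as follows: each short-time frame is built by giving a target magnitude envelope a random phase, taking an inverse FFT, and overlap-adding windowed frames. The windowing and normalization choices here are assumptions for illustration; the paper's exact scheme may differ:

```python
import numpy as np

def ifft_noise(mag_env, n_frames, hop, n_fft=1024, seed=0):
    """Synthesize noise frame by frame in the time-frequency domain.

    mag_env: target magnitude per bin (length n_fft // 2 + 1).
    Each frame gets an independent random phase before the inverse FFT,
    and frames are overlap-added with a Hann window.
    """
    rng = np.random.default_rng(seed)
    win = np.hanning(n_fft)
    out = np.zeros(hop * (n_frames - 1) + n_fft)
    n_bins = n_fft // 2 + 1
    for f in range(n_frames):
        phase = rng.uniform(0.0, 2 * np.pi, n_bins)
        spec = mag_env * np.exp(1j * phase)      # random-phase spectrum
        frame = np.fft.irfft(spec, n_fft)        # back to the time domain
        out[f * hop:f * hop + n_fft] += win * frame
    return out

# A low-pass, "wind-like" magnitude envelope over the 513 bins.
bins = np.arange(513)
env = np.exp(-bins / 40.0)
sig = ifft_noise(env, n_frames=50, hop=512)
```

In a one-stage spatialized version, per-channel gains (and decorrelated phases for source-width control) would be applied to `spec` before the inverse FFT, rather than to the finished mono signal.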
In virtual auditory environments, a spatialized sound source is typically simulated in two stages: first, a "dry" monophonic signal is recorded or synthesized; then, spatial attributes (directivity, width, and position) are applied by dedicated signal-processing algorithms. In this paper, a unified analysis/spatialization/synthesis system is presented. It is based on the spectral modeling framework, which analyzes/synthesizes sounds as a combination of time-varying sinusoidal, noisy, and transient contributions. The proposed system takes advantage of this representation to allow intrinsic parametric sound transformations, such as the spatial distribution of sinusoids or the diffusion of the noisy contribution around the listener. It integrates timbre and spatial parameters at the same level of sound generation, so as to enhance both control capability and computational performance.
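The "spatial distribution of sinusoids" idea can be sketched by synthesizing each partial and panning it independently. Constant-power stereo panning is used here as a simplified stand-in for the paper's full 3D spatialization; the function name and parameters are illustrative assumptions:

```python
import numpy as np

def spatialized_partials(freqs, amps, azimuths, dur=1.0, sr=44100):
    """Synthesize sinusoidal partials, each panned to its own position.

    azimuths lie in [-1, 1] (left to right); constant-power gains keep
    the perceived loudness stable as a partial moves across the field.
    """
    t = np.arange(int(sr * dur)) / sr
    left = np.zeros_like(t)
    right = np.zeros_like(t)
    for f, a, az in zip(freqs, amps, azimuths):
        theta = (az + 1) * np.pi / 4          # map [-1, 1] -> [0, pi/2]
        s = a * np.sin(2 * np.pi * f * t)     # the partial itself
        left += np.cos(theta) * s             # constant-power gain pair
        right += np.sin(theta) * s
    return np.stack([left, right])

# Three partials of one source spread across the stereo field.
stereo = spatialized_partials([220.0, 440.0, 660.0],
                              [1.0, 0.5, 0.25],
                              [-0.8, 0.0, 0.8])
```

Because panning happens per partial at generation time, no separate post-synthesis spatialization pass over a mono mix is needed, which is the efficiency argument the abstract makes for the one-stage architecture.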