Excitation-continuous music instrument control patterns are often not explicitly represented in current sound synthesis techniques when applied to automatic performance. Both physical model-based and sample-based synthesis paradigms would benefit from a flexible and accurate instrument control model, enabling improved naturalness and realism. We present a framework for modeling bowing control parameters in violin performance. Nearly non-intrusive sensing techniques allow for accurate acquisition of relevant timbre-related bowing control parameter signals. We model the temporal contour of bow velocity, bow pressing force, and bow-bridge distance as sequences of short cubic Bézier curve segments. Considering different articulations, dynamics, and performance contexts, a number of note classes are defined. Contours of bowing parameters in a performance database are analyzed at the note level by following a predefined grammar that dictates characteristics of curve segment sequences for each of the classes in consideration. As a result, contour analysis of bowing parameters of each note yields an optimal representation vector that is sufficient for reconstructing the original contours with high fidelity. From the resulting representation vectors, we construct a statistical model based on Gaussian mixtures suitable for both the analysis and synthesis of bowing parameter contours. By using the estimated models, synthetic contours can be generated through a bow planning algorithm able to reproduce possible constraints caused by the finite length of the bow. Rendered contours are successfully used in two preliminary synthesis frameworks: digital waveguide-based bowed string physical modeling and sample-based spectral-domain synthesis.
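The core representation described above — a bowing parameter contour as a sequence of short cubic Bézier segments — can be illustrated with a minimal sketch. This is not the paper's analysis or fitting procedure; it only shows how a contour is reconstructed from segment control points. The segment values (a hypothetical bow-velocity contour with attack, sustain, and release phases) are invented for illustration.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    t = np.asarray(t)
    return ((1 - t) ** 3 * p0
            + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2
            + t ** 3 * p3)

def render_contour(segments, samples_per_segment=50):
    """Concatenate cubic Bezier segments into one sampled contour.

    Each segment is four (time, value) control points; consecutive
    segments share endpoints so the contour is continuous.
    """
    t = np.linspace(0.0, 1.0, samples_per_segment)
    parts = [cubic_bezier(*[np.asarray(p, dtype=float) for p in seg], t[:, None])
             for seg in segments]
    return np.concatenate(parts)

# Hypothetical bow-velocity contour for one note: attack, sustain, release.
segments = [
    [(0.0, 0.0), (0.05, 0.3), (0.1, 0.55), (0.15, 0.6)],   # attack ramp
    [(0.15, 0.6), (0.3, 0.62), (0.5, 0.62), (0.65, 0.6)],  # quasi-steady sustain
    [(0.65, 0.6), (0.7, 0.5), (0.75, 0.2), (0.8, 0.0)],    # release
]
contour = render_contour(segments)  # (time, velocity) samples
```

A per-note representation vector in this scheme would essentially store the control points of such segments, which is why a handful of values suffices to reconstruct the contour.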
The transmission and storage technologies now available have amassed a vast amount of digital audio. All this audio is readily transferable, but it may be useless without clear knowledge of its content attached to it as metadata. Such knowledge can be added manually, but this is not feasible for millions of online files. In this paper we present a method to automatically derive acoustic information about audio files and a technology to classify and retrieve audio examples.
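The general workflow implied here — derive compact acoustic descriptors from an audio file, then classify or retrieve by comparing descriptor vectors — can be sketched minimally. This is not the paper's method; the descriptors (mean spectral centroid and RMS energy) and the nearest-neighbour comparison are deliberately simple stand-ins chosen for illustration.

```python
import numpy as np

def descriptors(signal, frame_len=512):
    """Frame-level spectral centroid and RMS energy, averaged over the clip.

    A tiny stand-in for a full acoustic-description front end.
    """
    n = len(signal) // frame_len
    frames = signal[:n * frame_len].reshape(n, frame_len) * np.hanning(frame_len)
    mag = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.arange(mag.shape[1])
    centroid = (mag * freqs).sum(axis=1) / (mag.sum(axis=1) + 1e-12)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return np.array([centroid.mean(), rms.mean()])

def classify(query, labeled):
    """Nearest-neighbour retrieval over stored descriptor vectors."""
    q = descriptors(query)
    return min(labeled, key=lambda name: np.linalg.norm(q - labeled[name]))
```

In a real system the descriptor set would be far richer and the comparison would use a trained classifier, but the extract-then-compare structure is the same.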
Audio fingerprinting technologies allow the identification of audio content without the need for external metadata or watermark embedding. They work by extracting a compact content-based digest that summarizes a recording and comparing it with a database of previously extracted fingerprints. In this paper we present a fingerprint scheme based on Hidden Markov Models. This approach achieves high compaction of the audio signal by exploiting structural redundancies in music, and robustness to distortions thanks to the stochastic modeling. We present the basic functionality of the system as well as some results.
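The extract-digest-then-match workflow can be illustrated with a minimal sketch. Note that this uses a simple binary band-energy hash rather than the Hidden Markov Model scheme the paper proposes; it only demonstrates the general pattern of comparing a compact digest of a distorted query against a fingerprint database.

```python
import numpy as np

def fingerprint(signal, frame_len=256, n_bands=8):
    """Compact binary digest: sign of band-energy differences between
    adjacent frames (a simplified stand-in for the paper's HMM codes)."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    spectra = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1)) ** 2
    # Group the 128 non-DC bins into n_bands equal-width bands.
    bands = spectra[:, 1:1 + n_bands * 16].reshape(n_frames, n_bands, 16).sum(axis=2)
    return (np.diff(bands, axis=0) > 0).astype(np.uint8)

def match(query_fp, db):
    """Return the database key with the lowest bit-error rate to the query."""
    def ber(a, b):
        n = min(len(a), len(b))
        return np.mean(a[:n] != b[:n])
    return min(db, key=lambda key: ber(query_fp, db[key]))
```

The bit-error rate between fingerprints stays low under mild distortion of the same recording, which is what makes identification possible; the paper's stochastic modeling targets the same robustness with much stronger guarantees.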