The participatory design approach used in developing the IMAP was fundamental in ensuring its relevance, and regular feedback from end users in each phase of development proved valuable for early identification of issues. Observations and feedback from end users supported a holistic approach to music aural rehabilitation.
While much work is proceeding on the preservation and restoration of audio documents in general, and of compositions for tape in particular, relatively little research has been published on the issues of preserving compositions for live electronics. Such works often involve a distinct performance element that is difficult to capture in a single recording, and it is typically only in performance that such works can be experienced as the composer intended. However, performances can become difficult or even impossible to present over time due to data and/or equipment issues. Sustainability here therefore refers to the effective recording of all the information necessary to set up the live electronics for a performance. Equally, it refers to the availability of appropriate devices, as rapid technological change soon makes systems obsolete and manufacturers discontinue production. The authors have a range of experience re-working performances over a number of years, including compositions by Luigi Nono and Jonathan Harvey, amongst others. In this paper we look at the problem as a whole, focusing on Jonathan Harvey's works with electronic elements, which span some twenty-six years, as exemplars of the types of problems involved.
Extracting a singing voice from its music accompaniment can significantly facilitate certain applications of Music Information Retrieval including singer identification and singing melody extraction. In this paper, we present a hybrid approach for this purpose, which combines properties of the Azimuth Discrimination and Resynthesis (ADRess) method with Independent Component Analysis (ICA). Our proposed approach is developed specifically for the case of singing voice separation from stereophonic recordings. The paper presents the characteristics of the proposed method and details an objective evaluation of its effectiveness.
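The core of the ADRess method referenced above is a frequency-azimuth construction for stereo recordings: sources panned by intensity can be cancelled by subtracting one channel, scaled by a gain, from the other, and a deep minimum in the residual magnitude reveals a source's panning position. The following is a minimal, hypothetical sketch of that construction only (the full ADRess method also handles the left/right half-planes separately and performs resynthesis, and the paper's hybrid with ICA is not reproduced here):

```python
import numpy as np

def azimuth_plane(left_fft, right_fft, beta=101):
    """Build a simplified frequency-azimuth plane.

    For each candidate gain g in [0, 1], compute |L(f) - g * R(f)|.
    A source panned with intensity ratio Pl/Pr produces a magnitude
    minimum near g = Pl/Pr, localising it on the azimuth axis.
    Returns an array of shape (beta, n_bins)."""
    gains = np.linspace(0.0, 1.0, beta)
    # Broadcast: one row of residual magnitudes per candidate gain.
    return np.abs(left_fft[None, :] - gains[:, None] * right_fft[None, :])

# Toy usage: a single source panned with Pl = 0.6, Pr = 1.0 should
# produce its cancellation minimum near g = 0.6 on the azimuth axis.
rng = np.random.default_rng(0)
source = rng.standard_normal(8) + 1j * rng.standard_normal(8)
plane = azimuth_plane(0.6 * source, 1.0 * source, beta=101)
best_gain = np.linspace(0.0, 1.0, 101)[plane.sum(axis=1).argmin()]
print(best_gain)
```

In the full method, the magnitudes recovered at such minima are used to resynthesise the separated source; here only the localisation step is shown.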
The musical use of real-time digital audio tools implies the need for simultaneous control of a large number of parameters to achieve the desired sonic results. Often it is also necessary to be able to navigate between certain parameter configurations in an easy and intuitive way, rather than to precisely define the evolution of the values for each parameter. Graphical interpolation systems (GIS) provide this level of control by allocating objects within a visual control space to sets of parameters that are to be controlled, and using a moving cursor to change the parameter values according to its current position within the control space. This paper describes Interpolator, a two-dimensional interpolation system for controlling digital signal processing (DSP) parameters in real time.
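The GIS mechanism described above can be sketched with inverse-distance weighting, one common interpolation scheme for such spaces (the paper's Interpolator may use a different weighting function; the class and parameter names here are illustrative, not taken from the paper):

```python
import math

class InterpolationSpace2D:
    """Hypothetical sketch of a 2D graphical interpolation space.

    Each object sits at an (x, y) position and carries a full set of
    DSP parameter values; the cursor position yields a blend of all
    objects, weighted by inverse-square distance."""

    def __init__(self):
        self.objects = []  # list of ((x, y), {param_name: value})

    def add_object(self, pos, params):
        self.objects.append((pos, dict(params)))

    def interpolate(self, cursor):
        cx, cy = cursor
        weighted, total = [], 0.0
        for (x, y), params in self.objects:
            d = math.hypot(cx - x, cy - y)
            if d == 0.0:
                # Cursor exactly on an object: return its preset directly.
                return dict(params)
            w = 1.0 / (d * d)
            weighted.append((w, params))
            total += w
        # Normalised weighted average of every parameter.
        result = {}
        for w, params in weighted:
            for name, value in params.items():
                result[name] = result.get(name, 0.0) + (w / total) * value
        return result

# Two presets: a cursor midway between them blends their parameters equally.
space = InterpolationSpace2D()
space.add_object((0.0, 0.0), {"cutoff": 200.0, "resonance": 0.1})
space.add_object((1.0, 0.0), {"cutoff": 2000.0, "resonance": 0.9})
print(space.interpolate((0.5, 0.0)))  # equidistant cursor -> averaged values
```

Moving the cursor continuously through the space then produces continuous trajectories through the parameter configurations, which is the navigation behaviour the abstract describes.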
Original article can be found at: http://journals.cambridge.org/ Copyright Cambridge University Press.
This paper presents an overview of a generic task model of music composition, developed as part of a research project investigating methods of improving user-interface designs for music software (in particular focusing on sound synthesis tools). The task model has been produced by applying recently developed task analysis techniques to the complex and creative task of music composition. The model itself describes the purely practical aspects of music composition, avoiding any attempt to include the aesthetic motivations and concerns of composers. We go on to illustrate the application of the task model to software design by describing various parts of Modalyser, a graphical user-interface program designed by the author for creating musical sounds with IRCAM's Modalys physical modelling synthesis software. The task model is not yet complete at all levels and requires further refinement, but is deemed to be sufficiently comprehensive to merit presentation here. Although developed for assisting in software design, the task model may be of wider interest to those concerned with the education of music composition and research into music composition generally. This paper has been developed from a short presentation given at the First Sonic Arts Network Conference in January 1998.