Studies on the perception of music qualities (such as induced or perceived emotions, performance styles, or timbre nuances) make extensive use of verbal descriptors. Although many authors have noted that particular music qualities can hardly be described by means of verbal labels, few studies have tried alternatives. This paper explores the use of non-verbal sensory scales to represent different perceived qualities in Western classical music. Musically trained and untrained listeners were asked to listen to six musical excerpts in major key and to evaluate them from a sensorial and semantic point of view (Experiment 1). The same design (Experiment 2) was conducted with musically trained and untrained listeners who were asked to listen to six musical excerpts in minor key. The overall findings indicate that subjects' ratings on non-verbal sensory scales are consistent throughout, and the results support the hypothesis that sensory scales can convey some specific sensations that cannot be described verbally, offering interesting insights that deepen our knowledge of the relationship between music and other sensorial experiences. Such research can foster interesting applications in the field of music information retrieval and timbre space exploration, together with experiments applied to different musical cultures and contexts.
This paper presents a methodology for the preservation of audio documents, the operational protocol that puts the methodology into practice, and an original open source software system that supports and automates several tasks along the process. The methodology is presented in the light of the ethical debate that has been challenging the international archival community for the last thirty years. The operational protocol reflects the methodological principles adopted by the authors, and its effectiveness is based on the results obtained in recent research projects involving some of the finest audio archives in Europe. Some recommendations are given for the re-recording process, aimed at minimizing information loss and at quantifying the unintentional alterations introduced by the technical equipment. Finally, the paper introduces an original software system that guides and supports the preservation staff throughout the process, reducing processing time, automating tasks, minimizing errors, and using information hiding strategies to ease the cognitive load. The software system is currently in use in several international archives.
The important role of the valence and arousal dimensions in representing and recognizing affective qualities in music is well established. There is less evidence for the contribution of secondary dimensions such as potency, tension, and energy. In particular, previous studies failed to find significant relations between computable musical features and affective dimensions other than valence and arousal. Here we present two experiments aimed at assessing how musical features, directly computable from complex audio excerpts, are related to secondary emotion dimensions. To this aim, we imposed some constraints on the musical features, namely modality and tempo, of the stimuli. The results show that although arousal and valence dominate for many musical features, it is possible to identify features, in particular Roughness, Loudness, and Spectral Flux, that are significantly related to the potency dimension. As far as we know, this is the first study to gain insight into affective potency in the music domain by using real music recordings and a computational approach.
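To make the notion of "directly computable" features concrete, the following is a minimal NumPy sketch of two of the features named above: loudness approximated as frame-wise RMS energy, and spectral flux as half-wave rectified spectral change between consecutive frames. The frame and hop sizes, the windowing choice, and the synthetic test signal are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def frame_signal(x, frame_len=1024, hop=512):
    """Slice a 1-D signal into overlapping frames of shape (n_frames, frame_len)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    return x[idx]

def rms_loudness(x, frame_len=1024, hop=512):
    """Frame-wise RMS energy, a simple loudness proxy."""
    frames = frame_signal(x, frame_len, hop)
    return np.sqrt(np.mean(frames ** 2, axis=1))

def spectral_flux(x, frame_len=1024, hop=512):
    """Half-wave rectified change between consecutive magnitude spectra."""
    frames = frame_signal(x, frame_len, hop) * np.hanning(frame_len)
    mags = np.abs(np.fft.rfft(frames, axis=1))
    diff = np.diff(mags, axis=0)          # change from frame i to frame i+1
    return np.sum(np.maximum(diff, 0.0), axis=1)

# Synthetic example: a 440 Hz tone that jumps to 880 Hz halfway through.
sr = 22050
t = np.arange(sr) / sr
x = np.where(t < 0.5, np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 880 * t))

loud = rms_loudness(x)
flux = spectral_flux(x)
# Loudness stays roughly flat; spectral flux peaks near the frequency change.
```

In a study like the one described, features of this kind would be computed over each excerpt and then correlated with listeners' ratings on the emotion dimensions.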
Expression is an important aspect of music performance. It is the added value of a performance and is part of the reason that music is interesting to listen to and sounds alive. Understanding and modeling expressive content communication is important for many engineering applications in information technology. For example, in multimedia products, textual information is enriched by means of graphical and audio objects. In this paper, we present an original approach to modify the expressive content of a performance in a gradual way, both at the symbolic and signal levels. To this purpose, we discuss a model that applies a smooth morphing among performances with different expressive content, adapting the audio expressive character to the user's desires. Morphing can be realized with a wide range of graduality (from abrupt to very smooth), allowing adaptation of the system to different situations. The sound rendering is obtained by interfacing the expressiveness model with a dedicated postprocessing environment, which allows for the transformation of the event cues. The processing is based on the organized control of basic audio effects. Among the basic effects used, an original method for the spectral processing of audio is introduced.