Perceptual dimensions underlying timbre and sound-source identification have received considerable scientific attention. While these scholarly insights help us understand the nature of sound within a multidimensional timbral space, they carry little meaning for the majority of musicians. To help address this, we conducted two experiments to establish listeners’ perceptual thresholds (PTs) for changes in sound using a staircase procedure. Unlike most timbre perception research, these changes were sonic manipulations that are common in synthesisers, audio processors and instruments familiar to musicians and producers, and they occurred within continuous sounds rather than between discrete pairs of sounds. In experiment 1, two sounds (variants of a sawtooth oscillation), both with the same fundamental frequency (F1: 80 Hz, 240 Hz or 600 Hz), were played with no intervening gap. In each trial, the two sounds’ partials differed in amplitude or frequency to produce a timbre change. The sonic manipulations were varied in size to detect thresholds for the perceived timbre change; listeners were instructed to indicate whether or not they perceived a change within the sound. In experiment 2, we modified stimulus presentation to introduce the factor of transition time (TT): rather than occurring instantaneously (as in experiment 1), the timbre manipulations were introduced gradually over a 100 ms or a 1000 ms TT. Results revealed that PTs were significantly affected by the manipulations in experiment 1, and additionally by TT in experiment 2. Importantly, the data revealed an interaction between F1 and the timbre manipulations, such that timbre changes affected the perceptual system differently depending on pitch height. Musicians (n=11) showed significantly smaller PTs than non-musicians (n=10). However, PTs for musicians and non-musicians were highly correlated (r=.83) across the different sonic manipulations, indicating similar perceptual patterns in both groups. By establishing PTs for commonly used timbre manipulations, we hope to provide musicians with a general perceptual unit for each manipulation that can guide music composition and assessment.
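The staircase procedure mentioned above is a standard adaptive method for estimating perceptual thresholds. The sketch below illustrates a simple 1-up/1-down variant in Python; the function name, parameter values, and the simulated listener are hypothetical and do not reproduce the authors’ actual experimental code.

```python
# Minimal sketch of a 1-up/1-down adaptive staircase for a change-detection
# task. All names and values are illustrative assumptions, not the authors'
# implementation.

def run_staircase(detects_change, start_size=1.0, step=0.1,
                  n_reversals=8, max_trials=100):
    """Estimate a perceptual threshold for a timbre manipulation.

    detects_change: callable taking the manipulation size and returning
    True if the listener reports hearing a change within the sound.
    """
    size = start_size
    direction = -1                          # start by shrinking the manipulation
    reversals = []
    for _ in range(max_trials):
        heard = detects_change(size)
        new_direction = -1 if heard else +1  # smaller if heard, larger if missed
        if new_direction != direction:
            reversals.append(size)           # record the turning point
        direction = new_direction
        size = max(step, size + direction * step)
        if len(reversals) >= n_reversals:
            break
    # Threshold estimate: mean of the later reversal points
    tail = reversals[-6:] or [size]
    return sum(tail) / len(tail)

# Example: a simulated listener who hears any change larger than 0.35
threshold = run_staircase(lambda s: s > 0.35)
print(round(threshold, 2))
```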
Harmonic cadences are chord progressions that play an important structural role in Western classical music: they demarcate musical phrases and contribute to the sense of tonality. This study examines participants’ ratings of the perceived arousal and valence of a variety of harmonic cadences. Manipulations included the type of cadence (authentic, plagal, half, and deceptive), its mode (major or minor), its average pitch height (the transposition of the cadence), the presence of a single tetrad (a dissonant four-tone chord), and the mode (major or minor) of the cadence’s final chord. With the exception of average pitch height, the manipulations had only small effects on arousal. However, the perceived valence of major cadences was substantially higher than that of minor cadences, and average pitch height had a medium-sized positive effect. Plagal cadences, the inclusion of a tetrad, and ending on a minor chord all had weak negative effects on valence. The present findings are discussed in light of contemporary music theory and music psychology, as knowledge of how specific acoustic components and musical structures shape emotion perception in music is important for performance practice and music-based therapies.
Mixing music is a highly complex task, one exacerbated by the fact that timbre perception is still poorly understood. As a result, few studies have been able to pinpoint listeners’ preferences in terms of timbre. To investigate timbre preference in a music production context, we had participants mix multiple individual parts of musical pieces (bassline, harmony, and arpeggio parts, all sounded with a synthesizer) by adjusting four specific timbral attributes of the synthesizer (lowpass, sawtooth/square wave oscillation blend, distortion, and inharmonicity). After mixing all parts of a musical piece, participants were asked to rate multiple mixes of the same piece. Listeners preferred their own mixes over random, fixed-sawtooth, and expert mixes. However, participants were unable to accurately identify their own mixes; they nevertheless consistently preferred the mix they believed to be their own, regardless of whether or not it actually was. Correlation and cluster analyses of the participants’ mixing settings showed that most participants behaved independently in their mixing approaches, with one moderate-sized cluster of participants whose settings were rather similar. Relative to the starting settings, participants applied the largest changes to the sound (measured as perceptual distance) with the inharmonicity manipulation, despite often mentioning that they did not find this manipulation particularly useful. The results show that listeners have consistent yet individual timbre preferences and are able to reliably shift timbre towards those preferences.
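The cluster analysis of mixing settings mentioned above can be illustrated with a brief sketch: each participant is represented by a vector over the four timbral attributes and grouped by hierarchical clustering. The data, distance metric, and cut threshold below are hypothetical assumptions, not the study’s analysis code.

```python
# Illustrative sketch: cluster participants by their final mixing settings.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# One row per participant: [lowpass, osc_blend, distortion, inharmonicity]
# (values are made up for illustration)
settings = np.array([
    [0.62, 0.30, 0.10, 0.05],
    [0.58, 0.35, 0.12, 0.04],
    [0.20, 0.80, 0.40, 0.30],
    [0.55, 0.33, 0.15, 0.06],
])

# Pairwise distances between participants' settings, then average linkage
distances = pdist(settings, metric="euclidean")
tree = linkage(distances, method="average")

# Cut the tree into flat clusters at a chosen distance threshold
labels = fcluster(tree, t=0.5, criterion="distance")
print(labels)   # participants sharing a label fall into the same cluster
```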
Perceived relationships between timbres are critical in electroacoustic music. Most studies assume that timbres have fixed inter-relationships, but we tested whether distinct tasks change these. Thirty short sounds were used, drawn from five categories: acoustic instruments, impulse responses, convolutions of the two preceding categories, environmental sounds, and computer-manipulated instrumental sounds. In Task 1, 46 non-musicians formed a ‘cohesive’ sonic ordering of unlabelled icons, each with a sound attached. In Task 2, they categorised the icons into four boxes. In Task 3, listeners separately ordered the sounds from each of Task 2’s boxes using the approach of Task 1. Task 1 and Tasks 2/3 revealed distinct orderings, consistent with conceptual flexibility. To analyse the orderings, we replaced conventional distance measures with adjacency measures and described each system as a network (rather than as positions in a space), confirming that the two task outcomes were distinct. Network analyses also showed that the two systems were mechanistically distinct and allowed us to model the observed networks as successive perceptions, predicting how the networks change over time. Simulated networks generated with this temporal model readily encompassed all possible pairings between the sounds, not just those we observed. The temporal network model thus confirms conceptual flexibility even in untrained listeners, a flexibility that composers can readily exploit.
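As a rough illustration of the adjacency-based network representation described above, the sketch below turns one listener’s linear ordering of sounds into a graph in which neighbouring sounds become connected nodes. The sound labels and the simple unit-weight scheme are assumptions for illustration only, not the authors’ analysis pipeline.

```python
# Illustrative sketch: build an adjacency network from a listener's ordering.
import networkx as nx

# One participant's 'cohesive' ordering of sound icons (hypothetical labels)
ordering = ["flute", "impulse_hall", "conv_flute_hall", "rain", "granular_cello"]

G = nx.Graph()
# Sounds placed next to each other in the ordering become adjacent nodes;
# aggregating edges over many participants would weight frequent adjacencies.
for a, b in zip(ordering, ordering[1:]):
    if G.has_edge(a, b):
        G[a][b]["weight"] += 1
    else:
        G.add_edge(a, b, weight=1)

# Simple network descriptors that could be compared across tasks
print(nx.degree_centrality(G))
print(nx.density(G))
```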