Where is the beat in that note? Effects of attack, duration, and frequency on the perceived timing of musical and quasi-musical sounds

Abstract
When coordinating physical actions with sounds, we synchronise our actions with the perceptual center (P-center) of the sound, understood as the specific moment at which the sound is perceived to occur. Using matched sets of real and artificial musical sounds as stimuli, we probed the influence of Attack (rise time), Duration, and Frequency (center frequency) on perceived P-center location and P-center variability. Two different methods were used to determine the P-centers: aligning clicks in phase with the target sounds via the method of adjustment, and tapping in synchrony with the target sounds. We found that attack and duration are primary cues for P-center location and P-center variability, and that the latter is a useful measure of P-center shape. Probability density distributions for each stimulus display a systematic pattern of P-center shapes, ranging from narrow peaks close to the onset for sounds with a fast attack and short duration to wider, flatter shapes indicating a range of synchronization points for sounds with a slow attack and long duration. The results support the conception of P-centers not as simple time points but as "beat bins" with characteristic shapes, and the shapes and locations of these beat bins depend on both the stimulus and the synchronization task.

Public significance statement
In music and dance, as well as many other contexts, we coordinate our physical actions with sounds. Our research shows how the fine-grained details of a sound interact in our temporal perception of it. This has implications for a wide range of applications that involve timing, from rehearsing musical ensembles to the sonification of complex patterns of information.
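The "beat bin" idea above can be illustrated with a small sketch: estimating the shape of a P-center distribution from a set of synchronization points via a kernel density estimate. All numbers here are invented for illustration; the study's actual data and analysis may differ.

```python
# Illustrative sketch of "beat bin" shapes: a narrow density peak for a
# fast-attack/short sound vs. a wide, flat density for a slow-attack/long
# sound. Tap offsets are simulated, not taken from the study.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Simulated tap offsets (ms after acoustic onset) for two stimulus types
fast_short = rng.normal(loc=10, scale=4, size=300)   # narrow peak near onset
slow_long = rng.normal(loc=60, scale=25, size=300)   # wide, flat "bin"

grid = np.linspace(-50, 150, 400)
for name, taps in [("fast/short", fast_short), ("slow/long", slow_long)]:
    density = gaussian_kde(taps)(grid)
    peak = grid[np.argmax(density)]
    print(f"{name}: density peak at {peak:.0f} ms")
```

A narrow, tall density corresponds to a well-defined P-center; a low, wide one indicates a broad range of acceptable synchronization points.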
Links between music and body motion can be studied through experiments called sound-tracing. One of the main challenges in such research is to develop robust analysis techniques that can deal with the multidimensional data that musical sound and body motion present. The article evaluates four different analysis methods applied to an experiment in which participants moved their hands following perceptual features of short sound objects. Motion capture data were analyzed and correlated with a set of quantitative sound features using four different methods: (a) a pattern recognition classifier, (b) t-tests, (c) Spearman's ρ correlation, and (d) canonical correlation. This article shows how the analysis methods complement each other, and that applying several analysis techniques to the same data set can broaden the knowledge gained from the experiment.
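Method (c) above, rank correlation between a motion feature and a sound feature, can be sketched as follows. The feature names and data are invented for illustration and stand in for the study's actual motion-capture and sound descriptors.

```python
# Hypothetical sketch of Spearman's rho between one motion feature and
# one sound feature, sampled at matching frames. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented example series: a sound feature (e.g. spectral brightness)
# and a motion feature (e.g. vertical hand position) that tracks it
sound_brightness = rng.random(200)
hand_height = 0.8 * sound_brightness + 0.2 * rng.random(200)

rho, p_value = stats.spearmanr(hand_height, sound_brightness)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3g}")
```

Spearman's ρ works on ranks, so it captures any monotonic relationship between motion and sound without assuming linearity, which suits perceptual tracing data.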
In our own and other research on music-related actions, findings suggest that perceived action and sound are broken down into a series of chunks in people's minds when they perceive or imagine music. Chunks are here understood as holistically conceived and perceived fragments of action and sound, typically with durations in the 0.5 to 5 seconds range. There is also evidence suggesting the occurrence of coarticulation within these chunks, meaning the fusion of small-scale actions and sounds into more superordinate actions and sounds. Various aspects of chunking and coarticulation are discussed in view of their role in the production and perception of music, and it is suggested that coarticulation is an integral element of music and should be more extensively explored in the future.
This thesis investigates the expressive means through which musicians well versed in groove-based music shape the timing of a rhythmic event, with a focus on the interaction between produced timing and sound features. In three performance experiments with guitarists, bassists, and drummers, I tested whether musicians systematically manipulate acoustic factors such as duration, intensity, and volume when they want to play with a specific microrhythmic style (pushed, on-the-beat, or laid-back). The results show that all three groups of instrumentalists indeed played pushed, on-the-beat, or laid-back relative to the reference pulse and in line with the instructed microrhythmic styles, and that there were systematic and consequential sound differences. Guitarists played backbeats with a longer duration and darker sound in relation to pushed and laid-back strokes. Bassists played pushed beats with higher intensity than on-the-beat and laid-back strokes. For the drummers, we uncovered different timing-sound combinations, including the use of longer duration (snare drum) and higher intensity (snare drum and hi-hat), to distinguish both laid-back and pushed from on-the-beat strokes. The metronome as a reference pulse led to less marked timing profiles than the use of instruments as a reference, and in general it also led to earlier onset positions, which can perhaps be related to the phenomenon of "negative mean asynchrony." We also conducted an in-depth study of the individual drummers' onset and intensity profiles using hierarchical cluster analyses and phylogenetic tree visualizations and uncovered a diverse range of strategies. The results support the research hypothesis that both temporal and sound-related properties contribute to how we perceive the location of a rhythmic event in time.
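The hierarchical cluster analysis mentioned above can be sketched in a few lines. The per-drummer profiles below are invented, and the thesis's actual features, scaling, and linkage settings may differ; this only shows the general technique of grouping performers by timing/intensity profile.

```python
# Minimal sketch of hierarchical clustering on per-drummer profiles.
# Feature values are invented; rows = drummers, columns =
# [mean onset deviation (ms), mean intensity (normalized)].
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

profiles = np.array([
    [-15.0, 0.90],  # pushed, loud
    [-12.0, 0.85],  # pushed, loud
    [  2.0, 0.60],  # on the beat
    [ 18.0, 0.50],  # laid-back, soft
    [ 20.0, 0.55],  # laid-back, soft
])

# Ward linkage on Euclidean distances between profiles
# (in practice, features would be standardized first)
Z = linkage(profiles, method="ward")

# Cut the tree into three clusters, one per timing strategy
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)
```

The linkage matrix `Z` is also the input for dendrogram visualizations of the kind the phylogenetic-tree analysis produces.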
I discuss these results in light of theories and findings from other studies of the perception and performance of groove, as well as research into rhythm and microrhythmic phenomena such as perceptual centers and onset asynchrony/anisochrony.

This thesis was born of blood, sweat, and tears, as well as a whole lotta love. (Musical puns intended, though mainly for the in crowd [Oops, I did it again].) First of all, I would like to thank my supervisors, Anne Danielsen and Kristian Nymoen, for their invaluable guidance and support throughout this journey. Anne, thank you first and foremost for providing the opportunity to research groove music in a scholarly, scientific manner for all these years; the combination of praxis and theory has bolstered its awesome power for me many times over. Your shrewd mentorship, unceasing kindness and patience, and limitless passion for knowledge have been a constant source of learning and inspiration. It is an honor to work alongside such a juggernaut scholar (and fellow funk head!), and I hope to continue unraveling the mysteries of groove together with you. Kristian, thank you for painlessly leading me through a new and wonderful technological path, one that has opened up so many analytical possibil...
In speech and music, the acoustic and perceptual onset(s) of a sound are usually not congruent with its perceived temporal location. Rather, these "P-centers" are heard some milliseconds after the acoustic onset, and a variety of techniques have been used in speech and music research to find them. Here we report on a comparative study that uses various forms of the method of adjustment (aligning a click or filtered noise in-phase or anti-phase to a repeated target sound), as well as tapping in synchrony with a repeated target sound. The advantages and disadvantages of each method and probe type are discussed, and then all methods are tested using a set of musical instrument sounds that systematically vary in terms of onset/rise time (fast vs. slow), duration (short vs. long), and center frequency (high vs. low). For each method, the dependent variables were (a) the mean P-center location found for each stimulus type, and (b) the variability of the P-center location found for each stimulus type. Interactions between methods and stimulus categories were also assessed. We show that (a) in-phase and anti-phase methods of adjustment produce nearly identical results, (b) tapping vs. click alignment can provide different yet useful information regarding P-center locations, (c) the method of adjustment is sensitive to different sounds in terms of variability while tapping is not, and (d) using filtered noise as an alignment probe yields consistently earlier probe-onset locations in comparison to using a click as a probe.
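The two dependent variables above can be sketched directly: for each stimulus, the mean probe/tap offset relative to the acoustic onset gives the P-center location, and its spread gives the P-center variability. The per-trial offsets below are invented for illustration.

```python
# Hypothetical sketch of P-center location and variability per stimulus.
# Offsets (ms after acoustic onset) are invented, not from the study.
import statistics

offsets = {
    "fast_attack_short": [12, 15, 10, 14, 13],
    "slow_attack_long":  [55, 80, 40, 95, 60],
}

for stimulus, trials in offsets.items():
    location = statistics.mean(trials)      # mean P-center location
    variability = statistics.stdev(trials)  # P-center variability
    print(f"{stimulus}: location={location:.1f} ms, sd={variability:.1f} ms")
```

In this toy example the slow-attack, long sound has both a later location and a much larger spread, mirroring the pattern the abstracts describe.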