DESPITE THE CONSIDERATION THAT musical parallelism is an important factor in musical segmentation, there have been relatively few systematic attempts to describe exactly how it affects grouping processes. The main problem is that musical parallelism itself is difficult to formalize. In this study, a computational model that extracts melodic patterns from a given melodic surface is presented. Following the assumption that the beginning and ending points of "significant" repeating musical patterns influence the segmentation of a musical surface, the discovered patterns are used to determine probable segmentation points of the melody. "Significant" patterns are defined primarily in terms of frequency of occurrence and pattern length. The special status of nonoverlapping, immediately repeating patterns is also examined. All the discovered patterns are merged into a single "pattern" segmentation profile that indicates the points in the surface most likely to be perceived as points of segmentation. The effectiveness of the proposed melodic representations and algorithms is tested against a series of melodic surfaces, illustrating both the strengths and weaknesses of the approach.
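The core idea, that repeating patterns vote for boundaries at their start and end points and that these votes accumulate into a segmentation profile, can be sketched as a small toy. This is illustrative only: the function name and the plain frequency-times-length weighting are simplifying assumptions, not the authors' actual significance measure.

```python
from collections import defaultdict

def segmentation_profile(melody, min_len=2, max_len=6):
    """Toy "pattern" segmentation profile: every repeating pattern votes,
    with weight frequency * length, for a boundary at its start and end.
    (Hypothetical sketch; not the paper's exact significance measure.)"""
    n = len(melody)
    occurrences = defaultdict(list)          # pattern -> list of start indices
    for length in range(min_len, max_len + 1):
        for i in range(n - length + 1):
            occurrences[tuple(melody[i:i + length])].append(i)
    profile = [0.0] * (n + 1)                # candidate boundaries between notes
    for pattern, starts in occurrences.items():
        if len(starts) < 2:                  # occurs only once: not a repetition
            continue
        weight = len(starts) * len(pattern)  # frequency x length heuristic
        for s in starts:
            profile[s] += weight                 # boundary before the pattern
            profile[s + len(pattern)] += weight  # boundary after the pattern
    return profile
```

On a surface such as `['a', 'b', 'a', 'b', 'c', 'a', 'b']`, the strongest boundary falls where the repeated figure `a b` both ends and restarts, matching the intuition that pattern edges attract segmentation.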
LISTENERS ARE THOUGHT TO BE CAPABLE of perceiving multiple voices in music. This paper presents different views of what 'voice' means, with the aim of better understanding and systematically describing the cognitive task of segregating voices in music. Well-established perceptual principles of auditory streaming are examined and then tailored to the more specific problem of voice separation in timbrally undifferentiated music. Adopting this perceptual view of musical voice, a computational prototype is developed that splits a musical score (symbolic musical data) into different voices. A single 'voice' may consist of one or more synchronous notes that are perceived as belonging to the same auditory stream. The proposed model is tested against a small dataset that acts as ground truth; the results support the theoretical viewpoint adopted in the paper.

FIGURE 11. Number of voices: in terms of literal monophonic voices, all existing computational models will determine one, two, and three voices in the three examples, respectively. In terms of harmonic voices, all examples can be understood as comprising three voices (triadic harmony). In terms of perceptual voices/streams, each example is perceived as a single auditory stream (proposed algorithm).
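The classical pitch-proximity streaming principle that the paper starts from can be illustrated with a minimal greedy separator: each incoming note joins the active voice whose last pitch is nearest, and two synchronous notes are never merged into one voice. This is a hypothetical sketch of the baseline heuristic, not the paper's prototype, which notably allows synchronous notes to share a perceptual stream; the `max_leap` threshold is an assumption for illustration.

```python
def separate_voices(notes, max_leap=7):
    """Greedy sketch of stream segregation by pitch proximity.
    notes: iterable of (onset, pitch) pairs; pitch in semitones.
    (Illustrative baseline, not the paper's actual algorithm.)"""
    voices = []                              # each voice: list of (onset, pitch)
    for onset, pitch in sorted(notes):
        best = None
        best_dist = max_leap + 1             # reject leaps larger than max_leap
        for v in voices:
            last_onset, last_pitch = v[-1]
            # pitch-proximity principle: prefer the nearest active voice,
            # but never put two synchronous notes into the same voice
            if last_onset < onset and abs(pitch - last_pitch) < best_dist:
                best, best_dist = v, abs(pitch - last_pitch)
        if best is None:
            voices.append([(onset, pitch)])  # start a new voice
        else:
            best.append((onset, pitch))
    return voices
```

Two simultaneous dyads a tenth apart, for example, come out as two parallel voices rather than one, which is exactly the monophonic-voice reading the figure above contrasts with the perceptual-stream reading.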
We report three experiments examining the perception of tempo in expressively performed classical piano music. Each experiment investigates beat and tempo perception in a different way: rating the correspondence of a click track to a musical excerpt with which it was simultaneously presented; graphically marking the positions of the beats using an interactive computer program; and tapping in time with the musical excerpts. We examine the relationship between the timing of individual tones, that is, the directly measurable temporal information, and the timing of beats as perceived by listeners. Many computational models of beat tracking assume that beats coincide with the onsets of musical tones. We introduce a model, supported by the experimental results, in which the beat times are given by a curve calculated from the tone onset times that is smoother (less irregular) than the tempo curve of the onsets.

Tempo and beat are well-defined concepts in the abstract setting of a musical score, but not in the context of the analysis of expressive musical performance. That is, the regular pulse, which is the basis of rhythmic notation in common music notation, is anything but regular when the timing of performed notes is measured. These deviations from mechanical timing are an important part of musical expression, although they remain, for the most part, poorly understood. In this study we report on three experiments using one set of musical excerpts, which investigate the characteristics of the relationship between performed timing and perceived local tempo. The experiments address this relationship via the following tasks: rating the correspondence of a click track to a musical excerpt with which it was simultaneously presented; graphically marking the positions of the beats; and tapping in time with the musical excerpts.
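The smoothing idea, that perceived beat times follow a curve less irregular than the raw performed timing, can be sketched with a simple moving-average filter over inter-onset intervals. The window size and the cumulative-sum reconstruction are assumptions for illustration, not the authors' actual model.

```python
def smoothed_beats(onsets, window=3):
    """Sketch: derive beat times whose intervals are a smoothed (less
    irregular) version of the performed inter-onset intervals.
    (Illustrative moving average; not the paper's fitted curve.)"""
    iois = [b - a for a, b in zip(onsets, onsets[1:])]  # inter-onset intervals
    half = window // 2
    smoothed = []
    for i in range(len(iois)):
        lo, hi = max(0, i - half), min(len(iois), i + half + 1)
        smoothed.append(sum(iois[lo:hi]) / (hi - lo))   # local average IOI
    beats = [onsets[0]]                                  # rebuild beat times
    for s in smoothed:
        beats.append(beats[-1] + s)
    return beats
```

For onsets `[0.0, 0.5, 1.1, 1.5, 2.0]` the performed intervals range over 0.4–0.6 s, while the reconstructed beat intervals range only over 0.45–0.55 s: the beat curve tracks the performance but irons out local irregularity.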
In the first part of this article, the notions of identity, similarity, categorization, and feature salience are explored; musical examples are provided at various stages of the discussion. Then, formal working definitions are proposed that inextricably bind these concepts together. These definitions readily lend themselves to the development of a formal model for clustering, the Unscramble algorithm, which, given a set of objects and an initial set of properties, generates a range of plausible categorizations for a given context. Finally, as a test case, the clustering algorithm is used to organize a number of melodic segments, taken from a monophonic piece by J. S. Bach, into motivic categories; the algorithm also determines a prototype for each cluster and uses these prototypical descriptions for membership prediction tasks. The results of the computational system are compared with the empirical results obtained for the same data in two earlier studies (I. Deliège, 1996, 1997).

ONE significant component of musical understanding is the ability of listeners to cluster musical materials together into categories such as motives, themes, and so on. Salient musical cues enable listeners to make similarity judgments between various musical materials and to organize these into meaningful groups. In this study, the notions of feature salience,
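The general shape of feature-based clustering with per-cluster prototypes can be sketched as below. This greedy routine is hypothetical and much cruder than Unscramble: the similarity threshold, and the use of the members' feature intersection as the prototype, are assumptions made purely for illustration.

```python
def cluster_by_features(objects, min_shared=0.5):
    """Greedy sketch of prototype-based clustering.
    objects: dict mapping segment name -> frozenset of features.
    A segment joins the first cluster whose prototype (features shared
    by all members) covers at least min_shared of its own features.
    (Hypothetical; not Cambouropoulos's Unscramble algorithm.)"""
    clusters = []                            # each: {'members': [...], 'proto': set}
    for name, feats in objects.items():
        best = None
        for c in clusters:
            shared = feats & c['proto']
            if len(shared) / max(len(feats), 1) >= min_shared:
                best = c
                break
        if best is None:
            clusters.append({'members': [name], 'proto': set(feats)})
        else:
            best['members'].append(name)
            best['proto'] &= feats           # prototype = common features only
    return clusters
```

The resulting `proto` sets play the role of prototypical descriptions: a new segment can be assigned (membership prediction) by testing it against each prototype rather than against every member.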