The ongoing generation of expectations is fundamental to listeners' experience of music, but research into types of statistical information that listeners extract from musical melodies has tended to emphasize transition probabilities and n-grams, with limited consideration given to other types of statistical learning that may be relevant. Temporal associations between scale degrees represent a different type of information present in musical melodies that can be learned from musical corpora using expectation networks, a computationally simple method based on activation and decay. Expectation networks infer the expectation of encountering one scale degree followed in the near (but not necessarily immediate) future by another given scale degree, with previous work suggesting that scale degree associations learned by expectation networks better predict listener ratings of pitch similarity than transition probabilities. The current work outlines how these learned scale degree associations can be combined to predict melodic continuations and tests the resulting predictions on a dataset of listener responses to a musical cloze task previously used to compare two other models of melodic expectation, a variable-order Markov model (IDyOM) and Temperley's music-theoretically motivated model. Under multinomial logistic regression, all three models explain significant unique variance in human melodic expectations, with coefficient estimates highest for expectation networks. These results suggest that generalized scale degree associations informed by both adjacent and nonadjacent relationships between melodic notes influence listeners' melodic predictions above and beyond n-gram context, highlighting the need to consider a broader range of statistical learning processes that may underlie listeners' expectations for upcoming musical events.
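The activation-and-decay mechanism described above can be illustrated with a minimal sketch. This is not the authors' published implementation; the function names, the single decay constant, and the row-normalization step are all illustrative assumptions. The idea it demonstrates is the one the abstract names: each incoming note strengthens associations from all recently heard (still-active) scale degrees, so learned weights reflect both adjacent and nonadjacent temporal relationships, and a context's combined associations yield a distribution over continuations.

```python
import numpy as np

def train_expectation_network(melodies, n_degrees=12, decay=0.5):
    """Learn scale-degree association weights via activation and decay.

    Illustrative sketch (not the published model): on each note, every
    currently active degree strengthens its association with the incoming
    degree in proportion to its activation; activations then decay by a
    constant factor, so nonadjacent predecessors contribute less.
    """
    weights = np.zeros((n_degrees, n_degrees))
    for melody in melodies:
        activation = np.zeros(n_degrees)
        for degree in melody:
            weights[:, degree] += activation  # credit all active predecessors
            activation *= decay               # older notes fade
            activation[degree] = 1.0          # newest note fully active
    # Normalize each row into an expectation profile over following degrees.
    row_sums = weights.sum(axis=1, keepdims=True)
    return np.divide(weights, row_sums,
                     out=np.zeros_like(weights), where=row_sums > 0)

def predict_continuation(weights, context, decay=0.5):
    """Combine the learned associations of recent context notes into a
    distribution over the next scale degree (illustrative assumption)."""
    activation = np.zeros(weights.shape[0])
    for degree in context:
        activation *= decay
        activation[degree] = 1.0
    scores = activation @ weights
    total = scores.sum()
    if total > 0:
        return scores / total
    return np.full_like(scores, 1.0 / len(scores))
```

Trained on even a single melody such as `[0, 2, 4, 5, 7]`, the sketch assigns its highest continuation probability after the context `[0, 2]` to degree 4, since both context notes were followed (adjacently or nonadjacently) by that degree.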
Scales from divergent musical cultures tend to have both intuitive structural similarities and one common functional property: within a given scale, each note takes on a unique shade of meaning in the context of the scale as a whole. It may be that certain structural traits facilitate this functional property—in other words, that scales with particular structural characteristics are more globally integrated and capable of being processed in a top-down manner. Representing pitch collections as bit strings, the current work shows that in Western European, Northern Indian, and Japanese traditional musics, collections that are more densely packed with recursively nested, non-overlapping, uniquely identifiable repeated substrings (i.e., that are more hierarchizable) are more likely to appear as scales (p = .002).
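The bit-string representation used above can be sketched concretely. The encoding of a pitch collection as one bit per pitch class follows directly from the abstract; the naive repeated-substring search below is only an illustrative proxy for the study's hierarchizability measure, whose full recursive-nesting definition is not given here.

```python
def collection_to_bits(pitch_classes, n=12):
    """Encode a pitch collection as an n-bit string: bit i is 1 if pitch
    class i (semitones above a reference pitch) is in the collection."""
    members = set(pitch_classes)
    return ''.join('1' if pc in members else '0' for pc in range(n))

def longest_repeated_nonoverlapping(s):
    """Naive search for the longest substring that recurs without
    overlap -- an illustrative proxy, not the study's full measure."""
    for length in range(len(s) // 2, 0, -1):
        for i in range(len(s) - 2 * length + 1):
            sub = s[i:i + length]
            if sub in s[i + length:]:
                return sub
    return ''

major = collection_to_bits([0, 2, 4, 5, 7, 9, 11])  # '101011010101'
```

For the major scale, the encoding yields `'101011010101'`, in which the repeated substring `'10101'` (a run of alternating whole steps) recurs without overlap — the kind of internal redundancy the study associates with hierarchizable collections.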
Research in tonality perception commonly references the correlation between scale-degree occurrence frequencies and probe-tone ratings of tonal stability. This corpus study compares frequency of occurrence with 3 other statistical cues of tonal emphasis: average scale-degree duration, percentage of scale-degree instances appearing on downbeats, and percentage of scale-degree instances appearing in phrase-final positions. Using a mixed linear model that accounts for membership in the diatonic scale and random effects at the melody level, Experiment 1 finds that all frequency- and non-frequency-based measures of tonal emphasis except duration are highly significant predictors of the tonal hierarchy, with scale membership explaining the most unique variance, followed by phrase-final position and frequency of occurrence, then metric placement. Experiment 2 demonstrates that controlling for scale membership greatly attenuates the relationship between occurrence frequencies and probe-tone ratings, with frequencies explaining only 7% of variance in the tonal hierarchy beyond scale membership. Experiment 3 shows that phrase-final position, metric placement, and frequency best differentiate the tonic from other scale degrees, with all other predictors failing to reach significance. Together, these results suggest that higher frequency counts correspond to a more general tendency for tonally stable scale degrees to be emphasized across multiple musical dimensions and that frequency of occurrence is not a uniquely informative cue of tonal emphasis.
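The four corpus cues compared above — occurrence frequency, average duration, downbeat percentage, and phrase-final percentage — can each be tabulated per scale degree with a short sketch. The note-record field names (`degree`, `duration`, `downbeat`, `phrase_final`) are illustrative assumptions, not the study's actual corpus encoding.

```python
from collections import defaultdict

def tonal_emphasis_cues(notes):
    """Tabulate four per-scale-degree cues of tonal emphasis from a list of
    note records: degree (0-11), duration (beats), downbeat (bool),
    phrase_final (bool). Field names are illustrative assumptions."""
    totals = defaultdict(lambda: {'count': 0, 'dur': 0.0,
                                  'down': 0, 'final': 0})
    for note in notes:
        t = totals[note['degree']]
        t['count'] += 1
        t['dur'] += note['duration']
        t['down'] += int(note['downbeat'])
        t['final'] += int(note['phrase_final'])
    n_notes = sum(t['count'] for t in totals.values())
    return {degree: {
                'frequency': t['count'] / n_notes,        # share of all notes
                'mean_duration': t['dur'] / t['count'],   # avg duration (beats)
                'pct_downbeat': t['down'] / t['count'],   # downbeat proportion
                'pct_phrase_final': t['final'] / t['count']}
            for degree, t in totals.items()}
```

In the study's regression framework, cues like these serve as predictors of probe-tone ratings, with scale membership and melody-level random effects entered alongside them.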