In an experiment aimed at assessing dimensional properties of musical space, musicians rated the similarity of pairs of brief melodies on a 9-point scale. From our review of previous work, we hypothesized (1) that pitch variables would be considered more important than time or rhythmic variables by our subjects and (2) that the metrical consonance of pitch and duration patterns would generate a factor related to pattern regularity in listeners' musical space. Four melodies and their inversions were played in each of four rhythmic patterns (anapestic, dactylic, iambic, and trochaic) for a total of 1024 pattern pairs. Both multidimensional scaling and cluster analyses of similarity showed that at least five dimensions were needed for a good accounting of the perceptual space of these melodies. Surprisingly, the major dimensions found were rhythmic: (1) duple or triple rhythm, (2) accent first or last, and (3) iambic-dactylic versus trochaic-anapestic. Other dimensions were (4) rising or falling pitch and (5) the number of pitch-contour inflections. The tendency to rate patterns on the basis of time or rhythm (Dimensions I, II, and III) was negatively correlated with the tendency to rate patterns on the basis of pitch (Dimensions IV and V). It could not be determined whether this result depends on cognitive processing limitations, attention, or preferences. No factor was found that related to pattern regularity as we defined it.
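The abstract does not spell out how the 1024 pairs arise, but the arithmetic that fits is 4 melodies × 2 forms (original and inversion) × 4 rhythms = 32 patterns, with every ordered pairing of the 32 patterns (including self-pairs) giving 32 × 32 = 1024. A minimal sketch of that count, under this assumed reading:

```python
from itertools import product

# Assumed stimulus set: 4 melodies, each in original and inverted form,
# each played in 4 rhythmic patterns -> 32 distinct patterns.
melodies = range(4)
forms = ("original", "inversion")
rhythms = ("anapestic", "dactylic", "iambic", "trochaic")

patterns = list(product(melodies, forms, rhythms))
assert len(patterns) == 32

# All ordered pairs of patterns (self-pairs included) reproduce
# the 1024 pattern pairs reported in the abstract.
pairs = list(product(patterns, patterns))
assert len(pairs) == 1024
```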
This study focuses on the performer-listener link of the chain of musical communication. Using different perceptual methods (categorization, matching, and rating), as well as acoustical analyses of timing and amplitude, we found that both musicians and nonmusicians could discriminate among the levels of expressive intent of violin, trumpet, clarinet, oboe, and piano performers. Time-contour profiles showed distinct signatures between instruments and across expressive levels, which affords a basis for perceptual discrimination. For example, in "appropriate" expressive performances, a gradual lengthening of successive durations leads into the cadence. Although synthesized versions based on performance timings led to less response accuracy than did the complete natural performance, the evidence suggests that timing may be a more salient perceptual cue than amplitude. We outline a metabolic communication theory of musical expression that is based on a system of sequences of states, and changes of state, which fill gaps of inexorable time. We assume that musical states have a flexible, topologically deformable nature. Our conception allows for hierarchies and structure in active music processing that static generative grammars do not. This theory is supported by the data, in which patterns of timings and amplitudes differed across instruments and levels of expression.
This study was designed to explore the kinds of temporal patterning that foster pitch-difference discrimination. Musicians and nonmusicians rated the similarity of pairs of 9-note melodies that could differ in the pitch chroma of a single note at any of five serial positions. In a complete factorial design, there were 84 standard melodies (4 pitch patterns × 21 rhythms), each of which was paired with 10 octave-raised comparisons; 5 comparisons were identical to the standard in chroma and 5 had a single changed chroma. A literature review suggested that temporal accent occurs for tones initiating a lengthened temporal interval and for tones initiating a group of three or more intervals; pitch-level accent is a product of pitch skips on the order of 4 semitones or of a change of direction of the pitch contour. In this study there were three classes of temporal patterns. Rhythmically consonant patterns had temporal accenting that was always metrically in phase with pitch-level accenting and promoted the best performance. Rhythmically out-of-phase consonant patterns had temporal accenting and pitch-level accenting that occurred regularly at the same metrical rate, but the two were never in phase. Rhythmically dissonant patterns had temporal accenting and pitch-level accenting at different metrical rates. Patterns in the latter two classes sound syncopated, and they generally resulted in poorer pitch-discrimination performance. Musicians performed better than nonmusicians on all patterns; however, an account of performance in terms of "rhythmic nonconsonance" generated by the above three categories predicted 63% and 42% of the variance in musicians' and nonmusicians' performance, respectively. Performance at all serial positions was generally best for tones initiating long sound-filled intervals and was also better at a particular serial position when pitch-level accenting took the form of a pitch-contour inflection instead of a unidirectional pitch skip.
There was some evidence that rhythmic consonance early in a pattern improved musicians' performance at a later serial position.

According to Cooper and Meyer (1960), accenting is the basis for grouping in melodies. We assume that melodies whose pitches may be easily grouped constitute "good Gestalts" that are more easily coded and remembered. Accent is a perceptual phenomenon that is usually but not necessarily correlated with cues that may occur in each of several physical dimensions. Monahan and Carterette (1985) distinguished five major sources of cues for accent in monophonic melodies: (1) temporal patterning, (2) pitch-pattern shape (pitch contour and pitch interval sizes), (3) dynamic patterning, (4) the tonal system to which the set of pitches belongs, and (5) timbral patterning. In our discussion we will refer to physical cues as accenting and to perceptual phenomena as accent.
Certain factors were investigated that affect the intelligibility of a speech message which is presented to a listener simultaneously with an interfering speech message. In two of the four experiments reported, filters were introduced into one of the two channels that carried the messages. Thresholds of perceptibility were not reliably decreased by moderate amounts of filtering of the received message. However, articulation scores were considerably increased by the use of a high-pass filter (500 cps) in either of the two channels. The great advantage of presenting one message to one ear and the interfering message to the other ear (dichotic presentation) was measured by changes in the thresholds of perceptibility and by articulation tests. Functional relations between thresholds of perceptibility for the message to be received and the intensity of an interfering signal were determined for both monaural and dichotic listening. In separate tests, noise was also used as the interfering signal. Dichotic reception permits a reduction in intensity of the received signal of about 30 dB as compared with monaural reception. Articulation-gain functions demonstrated a similar advantage for dichotic over monaural listening. When the message to be received and the interfering message are monaurally received at equal intensities, the articulation scores for the designated messages are about 50 percent. If the message to be received is somewhat less intense than the interfering one, the cue value of the intensity difference offsets the increased masking of the less intense by the more intense message.
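To give the roughly 30 dB dichotic advantage a sense of scale: decibels express a power ratio as L = 10·log10(P1/P0), so a 30 dB reduction corresponds to a 1000-fold reduction in signal power. A minimal check of that arithmetic:

```python
import math

# Level difference in decibels: L = 10 * log10(P1 / P0).
# Invert to find the power ratio implied by a 30 dB reduction.
level_db = 30
power_ratio = 10 ** (level_db / 10)
assert power_ratio == 1000  # 30 dB ~ a factor of 1000 in power

# Round trip: converting the ratio back recovers the level.
assert math.isclose(10 * math.log10(power_ratio), 30)
```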