This study focuses on the performer-listener link in the chain of musical communication. Using different perceptual methods (categorization, matching, and rating), as well as acoustical analyses of timing and amplitude, we found that both musicians and nonmusicians could discriminate among the levels of expressive intent of violin, trumpet, clarinet, oboe, and piano performers. Time-contour profiles showed distinct signatures between instruments and across expressive levels, which affords a basis for perceptual discrimination. For example, in "appropriate" expressive performances, a gradual lengthening of successive durations leads to the cadence. Although synthesized versions based on performance timings led to less accurate responses than did the complete natural performances, the evidence suggests that timing may be a more salient perceptual cue than amplitude. We outline a metabolic communication theory of musical expression that is based on a system of sequences of states, and changes of state, which fill gaps of inexorable time. We assume that musical states have a flexible, topologically deformable nature. Our conception allows for hierarchies and structure in active music processing that static generative grammars do not. This theory is supported by the data, in which patterns of timing and amplitude differed among instruments and across levels of expression.
This study was designed to explore the kinds of temporal patterning that foster pitch-difference discrimination. Musicians and nonmusicians rated the similarity of pairs of 9-note melodies that could differ in the pitch chroma of a single note at any of five serial positions. In a complete factorial design, there were 84 standard melodies (4 pitch patterns x 21 rhythms), each of which was paired with 10 octave-raised comparisons; 5 comparisons were identical to the standard in chroma and 5 had a single changed chroma. A literature review suggested that temporal accent occurs for tones initiating a lengthened temporal interval and for tones initiating a group of three or more intervals; pitch-level accent is a product of pitch skips on the order of 4 semitones or of the change of direction of the pitch contour. In this study there were three classes of temporal patterns. Rhythmically consonant patterns had temporal accenting that was always metrically in phase with pitch-level accenting and promoted the best performance. Rhythmically out-of-phase consonant patterns had temporal accenting and pitch-level accenting that occurred regularly at the same metrical rate, but the two were never in phase. Rhythmically dissonant patterns had temporal accenting and pitch-level accenting at different metrical rates. Patterns in the latter two classes sound syncopated, and they generally resulted in poorer pitch-discrimination performance. Musicians performed better than nonmusicians on all patterns; however, an account of performance in terms of "rhythmic nonconsonance" generated by the above three categories predicted 63% and 42% of the variance in musicians' and nonmusicians' performance, respectively. Performance at all serial positions was generally best for tones initiating long sound-filled intervals and was also better at a particular serial position when pitch-level accenting took the form of a pitch contour inflection instead of a unidirectional pitch skip. 
There was some evidence that rhythmic consonance early in a pattern improved musicians' performance at a later serial position.

According to Cooper and Meyer (1960), accenting is the basis for grouping in melodies. We assume that melodies whose pitches may be easily grouped constitute "good Gestalts" that are more easily coded and remembered. Accent is a perceptual phenomenon that is usually, but not necessarily, correlated with cues that may occur in each of several physical dimensions. Monahan and Carterette (1985) distinguished five major sources of cues for accent in monophonic melodies: (1) temporal patterning, (2) pitch-pattern shape (pitch contour and pitch-interval sizes), (3) dynamic patterning, (4) the tonal system to which the set of pitches belongs, and (5) timbral patterning. In our discussion we refer to the physical cues as accenting and to the perceptual phenomenon as accent.
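The stimulus counts in the factorial design described earlier (84 standards, each paired with 10 comparisons) can be checked with a minimal sketch. The labels below are hypothetical placeholders, not the study's actual 9-note melodies:

```python
from itertools import product

# Hypothetical labels standing in for the study's stimulus factors.
pitch_patterns = [f"P{i}" for i in range(1, 5)]    # 4 pitch patterns
rhythms = [f"R{j}" for j in range(1, 22)]          # 21 rhythms

# Complete factorial crossing: 4 x 21 = 84 standard melodies.
standards = list(product(pitch_patterns, rhythms))

# Each standard was paired with 10 octave-raised comparisons:
# 5 identical to the standard in chroma and 5 with one changed chroma.
comparisons_per_standard = 10
total_pairs = len(standards) * comparisons_per_standard
print(len(standards), total_pairs)  # 84 standards, 840 standard-comparison pairs
```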
In this study, the authors investigate the relationship between the musical soundtrack and visual images in the motion picture experience. Five scenes were selected from a commercial motion picture along with their composer-intended musical scores. Each soundtrack was paired with every visual excerpt, resulting in a total of 25 audiovisual composites. In Experiment 1, the 16 subjects selected the composite in which the pairing was considered the "best fit." Results indicated that the composer-intended musical score was identified as the best fit by the majority of subjects for all conditions. In Experiment 2, the 15 subjects rated all 25 composites on semantic differential scales. A significant interaction (p < .00005) between audiovisual combination and the various semantic differential scales was found. Analysis of this interaction revealed that the composer-intended combination yielded higher mean scores on the 4 adjective pairs of the Evaluative dimension. Clustering the subject responses into 2 factor scores (Evaluative vs. a hybrid of Activity and Potency) confirmed these high Evaluative mean scores. In addition, the response contours of the Activity/Potency dimension remained relatively consistent, suggesting that music exercises a strong and consistent influence over subject responses to an audiovisual composite, regardless of the visual stimulus. The results corroborate previous research indicating that a musical soundtrack can change the "meaning" of a film presentation. Comparison of the various soundtracks in music-theoretical terms helped identify musical elements that appeared to be relevant to specific subject ratings. These comparisons were used in the formulation of a model for music communication in the context of the motion picture experience.

Music has played an integral part in the motion picture experience almost since its inception. Even so-called "silent films" were usually accompanied by musical performers.
Considering the popularity of this art form and the fact that it has developed into a multi-billion-dollar industry, it is quite surprising that there has been so little empirical investigation into the role of film music. In the present study, the authors investigate the relationship between visual activity on-screen and the musical soundtrack. Two specific questions are of particular interest. First, can listeners reliably select the composer's intended soundtrack for a given visual scene from among several musical selections? Second, does a significant amount of variation occur in the perceptual response to a given scene when the visual stimulus remains constant and only the music is changed?

Psychomusicology • Spring/Fall 1994

Related Literature

There has been much speculation about the interaction of music and the visual element in motion pictures (
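The 25-composite design described above (every soundtrack crossed with every visual excerpt) can be sketched briefly. The scene and score names below are hypothetical placeholders:

```python
from itertools import product

# Hypothetical labels for the five film scenes and their composer-intended scores;
# score_i denotes the score composed for scene_i.
scenes = [f"scene_{i}" for i in range(1, 6)]
soundtracks = [f"score_{i}" for i in range(1, 6)]

# Crossing every soundtrack with every visual excerpt: 5 x 5 = 25 composites.
composites = list(product(scenes, soundtracks))

# The composer-intended pairings form the "diagonal" of this design.
intended = [(s, t) for s, t in composites
            if s.split("_")[1] == t.split("_")[1]]
print(len(composites), len(intended))  # 25 composites, 5 of them composer-intended
```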
A study on the verbal attributes of timbre was conducted in an effort to interpret the dimensional configuration of the similarity spaces of simultaneously sounding wind instrument timbres. In the first experiment, subjects rated 10 wind instrument dyads on eight factorially pure semantic differentials from von Bismarck's (1974a) experiments. Results showed that the semantic differentials failed to differentiate among the 10 timbres. The semantic differential methodology was therefore changed to verbal attribute magnitude estimation (VAME), in which a timbre is assigned an amount of a given attribute. This procedure resulted in better differentiation among the 10 timbres, with the first factor comprising attributes such as heavy, hard, and loud, and the second involving sharp and complex, in contrast with von Bismarck's results. Results of the VAME analysis separated alto saxophone dyads from all others, but mapped only moderately well onto the perceptual similarity spaces. It was suggested that many of the von Bismarck adjectives lacked ecological validity.