Lundqvist, Carlsson, Hilmersson, and Juslin (2009) presented evidence of differential autonomic emotional responses to “happy” and “sad” music in healthy adult listeners. The present study sought to replicate and extend these findings by employing a similar research design and measurement instruments. Unlike the original study, however, we used instrumental film music instead of vocal music, and we additionally assessed listeners’ music expertise. The present results show both similarities to and differences from the patterns of psychological and physiological responses reported in the previous work. Happy music evoked more happiness, a higher skin conductance level, a higher respiratory rate, and more zygomatic facial muscle activity than sad music, whereas sad music generated higher corrugator muscle activity than happy music. Influences of music sophistication and of sex were negligible. Taken together, these results further support the hypothesis that music induces differential autonomic emotional responses in healthy listeners. They also highlight the importance of replication and multi-site studies for strengthening the empirical basis of fundamental issues in music-psychological research.
It is unknown to what extent listeners in different Western countries share long-term representations of melodies as well as their genre associations, and whether such knowledge is modulated through music training. A group of German listeners (N = 40) rated their familiarity with 144 melody excerpts from different genres implicitly (melody structure) and explicitly (melody title). Melodies were identical to those used in a previous Franco-Canadian study (Peretz, Babaï, Lussier, Hébert, & Gagnon, 1995). In addition, melodies were attributed by the participants to predefined genre categories, and similarities between pairs of melodies were computed using an algorithm by Müllensiefen and Frieler (2006). Results revealed patterns of (un)familiarity which, in part, deviated from the previous study. Melodies from classical, ceremonial, and, to a lesser extent, children's songs categories were rated as most familiar, whereas traditional and more recent francophone tunes from mixed categories were judged as unfamiliar. Music training had no significant influence on implicit memory for melodies but rather on explicit knowledge of their titles. Computational analyses suggest that highly familiar and highly unfamiliar tunes share structural features with melodies belonging to the same category, whereas dissimilarities were detected between certain clusters of genre categories. Taken together, these results suggest that long-term representation of melodies is influenced by a listener's (Western) national background. Representations are differently affected by specific genres but only partially influenced by music training and by structural properties.
We investigated the effects of familiarity, level of musical expertise, musical tempo, and structural boundaries on the identification of familiar and unfamiliar tunes. Healthy Western listeners (N = 62; age range 14–64 years) judged their level of familiarity with a preselected set of melodies as the number of tones of a given melody was increased from trial to trial according to the so-called gating paradigm. The number of tones served as one dependent measure; the second was the physical duration of the stimulus presentation until listeners identified a melody as familiar or unfamiliar. Results corroborate previous work suggesting that listeners need less information to recognize familiar as compared to unfamiliar melodies. Both decreasing and increasing the original tempo by a factor of two delayed the identification of familiar melodies. Furthermore, listeners had more difficulty identifying unfamiliar melodies when tempo was increased. Finally, musical expertise facilitated the identification of both melodic categories, reducing the required number of tones. Taken together, the findings support theories which suggest that tempo information is encoded in melody representations, and that musical expertise is associated with especially efficient strategies for accessing long-term representations of melodic materials.