FEELINGS IN RESPONSE TO music are often accompanied by measurable bodily reactions such as goose bumps or shivers down the spine, commonly called "chills." To investigate the distinct acoustical and musical structural elements related to chills, self-reported chill reactions and bodily reactions were measured continuously. Chill reactions did not show a simple stimulus-response pattern, nor did they depend on personality traits such as low sensation seeking and high reward dependence. Musical preferences and listening situations also played a role in chill reactions. Participants seemed to react to musical patterns, not to mere acoustical triggers. The entry of a voice and changes in volume were the patterns most likely to elicit reactions. These results were also confirmed by a retest experiment.
Most people are able to identify basic emotions expressed in music and experience affective reactions to music. But does music generally induce emotion? Does it elicit subjective feelings, physiological arousal, and motor reactions reliably in different individuals? In this interdisciplinary study, measurement of skin conductance, facial muscle activity, and self-monitoring were synchronized with musical stimuli. A group of 38 participants listened to classical, rock, and pop music and reported their feelings in a two-dimensional emotion space during listening. The first entrance of a solo voice or choir and the beginning of new sections were found to elicit interindividual changes in subjective feelings and physiological arousal. Quincy Jones' "Bossa Nova" motivated movement and laughing in more than half of the participants. Bodily reactions such as "goose bumps" and "shivers" could be stimulated by the "Tuba Mirum" from Mozart's Requiem in 7 of 38 participants. In addition, the authors repeated the experiment seven times with one participant to examine intraindividual stability of effects. This exploratory combination of approaches throws a new light on the astonishing complexity of affective music listening.
Since Kate Hevner's early investigations on the perception of emotions while listening to music (Hevner, 1936), there have been many different approaches to the measurement of emotions. For example, Gabrielsson and Lindström Wik (2003) investigated musical expression by describing the verbal reports given by subjects. Most researchers have used distinct adjective scales for the rating of perceived or expressed emotions. Schlosberg (1954) found that such scales can be mapped onto two or three dimensions. Using these methods, self-reported data were collected at distinct and mostly nonequidistant points in time, which were chosen intuitively by the researcher. The experience of emotion in music and films unfolds over time: this has led to increasing interest in the continuous measurement of perceived emotions, made possible by technological developments since the 1990s. Subjects' responses can now be recorded in real time and synchronized to the stimuli. Schubert (1996, 2001/2002, 2004a, 2004b) was one of the first researchers to develop software that focuses on the perception of the temporal dynamics of emotion. However, up to now, methods for the recording of continuous responses have been based on researcher-developed software solutions, and there has been no agreement on the technical means, interfaces, or methods to use. The main aim of this contribution to the ongoing discussion is to propose standardized methods for the continuous measurement of self-reported emotions. The authors have designed new software, EMuJoy, for this purpose. It is freeware and can be distributed and used for research. Before describing our integrated software solution, we will address four questions: (1) the dimensionality of the emotion space, (2) technical aspects of data recording, (3) construction of the interface, and (4) the use of multiple stimulus modalities.
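The core data-recording idea described above, continuous two-dimensional self-reports time-locked to the stimulus, can be sketched as follows. This is a minimal illustration in the spirit of tools like EMuJoy, not the actual implementation; the class name, the [-1, 1] scale, and the sample values are assumptions.

```python
import time

class EmotionRecorder:
    """Sketch of continuous 2-D self-report recording: each sample is a
    (time, valence, arousal) triple, with time measured relative to
    stimulus onset so responses stay synchronized to the music."""

    def __init__(self):
        self.start = None
        self.samples = []  # list of (seconds_since_onset, valence, arousal)

    def begin_stimulus(self):
        # Monotonic clock avoids jumps if the system time is adjusted.
        self.start = time.monotonic()

    def record(self, valence, arousal):
        # valence and arousal assumed to lie in [-1, 1] (2-D emotion space)
        t = time.monotonic() - self.start
        self.samples.append((t, valence, arousal))

rec = EmotionRecorder()
rec.begin_stimulus()
rec.record(0.2, 0.5)   # mildly positive, moderately aroused
rec.record(0.8, 0.9)   # e.g., entrance of a solo voice: strong positive, high arousal
print(len(rec.samples))
```

Storing timestamps relative to stimulus onset is what lets responses from different subjects, or from physiological channels, be aligned sample-by-sample afterwards.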
Dimensionality of the Emotion Space

Utilizing similarity ratings of affect-descriptive terms, Russell (1980) demonstrated in his circumplex model of affect that such terms can be mapped onto a two-dimensional space. He analyzed the similarity matrix, resulting in a two-dimensional space containing the affective terms. This space was then scaled to fit the dimensions pleasure-displeasure (i.e., valence) and degree of arousal. Some researchers also use a third dimension, namely dominance (Russell & Mehrabian, 1977). However, arousal and valence appear to be sufficient to explain most of the variance of affective scales (Lang, 1995). Moreover, the use of a computer monitor restricts the software to two dimensions, of which valence and arousal appear to be the most important and universal (Russell, 1983). The use of other dimensions, such as pleasantness and liking, has recently been discussed by Ritossa and Rickard (2004). With respect to emotions in music, Schubert used Russell's model of two basic emotional dimensions in his EmotionSpace Lab (2DES; Schubert, 1996). He demonstrated, as Russell did, the validity and reliability of arousal and valence ...
Music has often been shown to induce emotion in listeners and is also often heard in social contexts (e.g., concerts, parties, etc.), yet until now, the influences of social settings on the emotions experienced by listeners were not known. This exploratory study investigated whether listening to music in a group setting alters the emotion felt by listeners. The emotional reactions to 10 musical excerpts were measured both psychologically (ratings on retrospective questionnaires, and button presses indicating the experience of a chill, defined as the experience of a shiver down the spine or goose pimples) and physiologically (skin conductance response) using a new, innovative multi-channel measuring device. In a repeated measures design, 14 members of an amateur orchestra (7 male, 7 female; mean age 29) came in for two testing sessions: once alone, and once as a group. Chills were validated in the data analysis: each chill was counted only if the button press was accompanied by a corresponding skin conductance response. The results showed no differences between conditions (group vs. solitary) for retrospective emotion ratings; however, the number of validated chills showed a non-significant trend towards more chills in the solitary listening session. Also, skin conductance responses during chills were significantly higher during the solitary listening condition. These results suggested that music listening was more arousing alone, possibly due to the social feedback and the reduced concentration on the music in the group setting.
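The chill-validation rule described above, counting a button press only when it co-occurs with a skin conductance response, can be sketched as a simple event-matching step. The 5-second matching window below is an assumption for illustration; the study does not report its exact criterion.

```python
def validate_chills(button_times, scr_onsets, window=5.0):
    """Return only those chill button presses (in seconds) that are
    followed within `window` seconds by a skin conductance response
    (SCR) onset. Sketch; the window length is an assumption."""
    validated = []
    for t in button_times:
        if any(t <= onset <= t + window for onset in scr_onsets):
            validated.append(t)
    return validated

# Hypothetical event times in seconds:
presses = [12.0, 45.5, 80.2]
scrs = [13.1, 79.0, 83.0]
print(validate_chills(presses, scrs))  # → [12.0, 80.2]
```

Requiring a physiological correlate filters out accidental or inattentive button presses, which is why the study reports counts of validated rather than raw chills.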
Music can arouse ecstatic "chill" experiences defined as "goose pimples" and as "shivers down the spine." We recorded chills both via subjects' self-reports and physiological reactions, finding that they do not occur in a reflex-like manner, but as a result of attentive, experienced, and conscious musical enjoyment.