The striking ability of music to elicit emotions assures its prominent status in human culture and everyday life. Music is often enjoyed and sought out for its ability to induce or convey emotions, which may manifest in anything from a slight variation in mood to changes in our physical condition and actions. Consequently, research on how we associate musical pieces with emotions and, more generally, on how music brings about an emotional response is attracting ever-increasing attention. This paper first provides a thorough review of studies on the relation between music and emotions from different disciplines. We then propose new insights to enhance automated music emotion recognition models using recent results from psychology, musicology, affective computing, semantic technologies and music information retrieval.
Most contemporary Western performing arts practices restrict creative interactions from audiences. Open Symphony is designed to explore audience-performer interaction in live music performance assisted by digital technology. Audiences can conduct improvising performers by voting for various musical 'modes'. Technological components include a web-based mobile application, a visual client displaying generated symbolic scores, and a server service for the exchange of creative data. The interaction model, app and visualisation were designed through an iterative participatory design process. The visualisation communicates the audience's directions to performers, who improvise music upon them, and gives the audience feedback on their voting. The system was experienced by about 120 audience and performer participants (35 completed surveys) in controlled (lab) and "real world" settings. Feedback on usability and user experience was overall positive, and the live interactions demonstrated significant levels of audience creative engagement. We identified further design challenges around the audience's sense of control, learnability and compositional structure.
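The abstract does not detail how the server aggregates audience votes into a direction for the performers. As a rough illustration only, the sketch below shows one plausible vote-tallying core: each audience device holds one live vote for a musical 'mode', and the current winner is what a visual client could render as a score direction. The mode names and the one-vote-per-device policy are assumptions, not details from the paper.

```python
from collections import Counter

# Hypothetical set of musical 'modes'; the actual modes used in
# Open Symphony are not specified in the abstract.
MODES = ("drone", "rhythmic", "melodic", "sparse")

class VoteTally:
    """Aggregates audience votes and yields the current winning mode."""

    def __init__(self):
        self.votes = {}  # device_id -> mode (one live vote per audience member)

    def cast(self, device_id: str, mode: str) -> None:
        if mode not in MODES:
            raise ValueError(f"unknown mode: {mode}")
        self.votes[device_id] = mode  # re-voting overwrites the earlier choice

    def winning_mode(self):
        if not self.votes:
            return None
        return Counter(self.votes.values()).most_common(1)[0][0]

# Usage: three audience members vote; a visual client would render the
# winning mode as a symbolic direction for the improvising performers.
tally = VoteTally()
tally.cast("phone-a", "rhythmic")
tally.cast("phone-b", "rhythmic")
tally.cast("phone-c", "drone")
print(tally.winning_mode())  # -> "rhythmic"
```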
This study investigates whether taking genre into account is beneficial for automatic music mood annotation in terms of the core affects valence, arousal, and tension, as well as several other mood scales. Novel techniques employing genre-adaptive semantic computing and audio-based modelling are proposed. A technique called ACTwg employs genre-adaptive semantic computing of mood-related social tags, whereas ACTwg-SLPwg combines semantic computing and audio-based modelling, both in a genre-adaptive manner. The proposed techniques are experimentally evaluated at predicting listener ratings for a set of 600 popular music tracks spanning multiple genres. The results show that ACTwg outperforms a semantic computing technique that does not exploit genre information, and that ACTwg-SLPwg outperforms conventional techniques and other genre-adaptive alternatives. In particular, improvements in the prediction rates are obtained for the valence dimension, which is typically the most challenging core affect dimension for audio-based annotation. The specificity of genre categories is not crucial for the performance of ACTwg-SLPwg. The study also presents analytical insights into inferring a concise tag-based genre representation for genre-adaptive music mood analysis.
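To make the general genre-adaptive idea concrete, the following minimal sketch weights per-genre audio models by a track's genre membership. This is not the paper's ACTwg-SLPwg formulation; the model forms, weights, and feature vector here are purely illustrative.

```python
import numpy as np

def genre_adaptive_predict(features, genre_weights, genre_models):
    """Combine per-genre mood predictions, weighted by genre membership.

    features:      1-D audio feature vector for one track
    genre_weights: dict genre -> membership weight (assumed to sum to 1)
    genre_models:  dict genre -> callable mapping features to a mood score
    """
    return sum(w * genre_models[g](features)
               for g, w in genre_weights.items())

# Toy example: two hypothetical linear per-genre valence models.
models = {
    "rock": lambda x: float(np.dot([0.3, -0.1], x)),
    "jazz": lambda x: float(np.dot([0.1, 0.4], x)),
}
weights = {"rock": 0.7, "jazz": 0.3}  # soft genre membership of the track
features = np.array([1.0, 2.0])
print(genre_adaptive_predict(features, weights, models))  # -> 0.34
```

The design point this illustrates is that a track need not be assigned to a single genre: soft membership weights let genre-specific models each contribute in proportion to how well the genre fits the track.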