Most contemporary Western performing arts practices offer audiences little scope for creative interaction. Open Symphony is designed to explore audience-performer interaction in live music performance assisted by digital technology: audiences conduct improvising performers by voting for various musical 'modes'. The technological components are a web-based mobile application, a visual client that displays generated symbolic scores, and a server service for exchanging creative data. The interaction model, app and visualisation were designed through an iterative participatory design process. The visualisation communicates the audience's directions to performers, who improvise upon them, and gives the audience feedback on their voting. The system was experienced by about 120 audience and performer participants (35 completed surveys) in controlled (lab) and 'real world' settings. Feedback on usability and user experience was positive overall, and the live interactions demonstrated significant levels of audience creative engagement. We identified further design challenges around the audience's sense of control, learnability and compositional structure.
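To make the interaction model concrete, the sketch below illustrates one plausible way the server-side vote aggregation could work: each audience member's latest mode vote is tallied and the winning mode is passed to the visual client as a direction for the performers. The mode names, types and function names are assumptions for illustration only; the abstract does not specify Open Symphony's actual protocol or data model.

```typescript
// Hypothetical sketch of mode-vote aggregation (names and mode set assumed,
// not taken from the Open Symphony paper).
type Mode = 'drone' | 'rhythmic' | 'melodic' | 'silence';

interface Vote {
  audienceId: string; // anonymous id issued by the web-based mobile app
  mode: Mode;
}

// Keep only each audience member's most recent vote, then pick the most
// popular mode; the visual client would render this as a symbolic score direction.
function tallyVotes(votes: Vote[]): Mode {
  const latest = new Map<string, Mode>();
  for (const v of votes) latest.set(v.audienceId, v.mode); // last vote per person wins

  const counts = new Map<Mode, number>();
  for (const mode of latest.values()) counts.set(mode, (counts.get(mode) ?? 0) + 1);

  let winner: Mode = 'drone';
  let best = -1;
  for (const [mode, n] of counts) {
    if (n > best) { best = n; winner = mode; }
  }
  return winner;
}
```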
Our Open Symphony system reimagines the music experience for a digital age, fostering alliances between performers, audiences and our digital selves. Open Symphony enables live participatory music performance in which the audience actively engages in the music creation process, made possible by state-of-the-art web technologies and data visualisation techniques. Through collaborations with local performers we will conduct a series of interactive music performances, revolutionizing the performance experience for both performers and audiences. The system throws music-creating possibilities open to every participant and offers a genuinely novel way to demonstrate Human-Computer Interaction through computer-supported cooperative creation and multimodal musical and visual perception.
Immersive virtual environments (IVEs) present rich possibilities for the experimental study of non-verbal communication. Here, the 'digital chameleon' effect, which suggests that a virtual speaker (agent) is more persuasive if it mimics its addressee's head movements, was tested. Using a specially constructed IVE, we recreate a full-body analogue of the 'digital chameleon' experiment. The agent's behaviour is manipulated in three conditions: 1) Mimic (Chameleon), in which it copies the participant's nodding; 2) Playback (Nodding Dog), in which it replays nods from a previous participant that are therefore unconnected with the content; and 3) Original (Human), in which it uses the pre-recorded actor's movements. The results do not support the original finding of differences in ratings of agent persuasiveness between conditions. However, motion capture data reveal systematic differences a) in the real-time movements of speakers and listeners and b) between the Original, Mimic and Playback conditions. We conclude that the automatic mimicry model is too simplistic and that this paradigm must address the reciprocal dynamics of non-verbal interaction to achieve its full potential.
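The three conditions amount to different sources for the agent's head motion. The sketch below shows one minimal way that selection could be expressed; the pose representation, frame indexing and delay parameter are assumptions for illustration and do not reflect the paper's implementation.

```typescript
// Hypothetical sketch of the three agent head-movement conditions
// (data shapes, names and the mimicry delay are assumed, not from the paper).
type Condition = 'mimic' | 'playback' | 'original';

interface HeadPose { pitch: number; yaw: number; roll: number }

function agentHeadPose(
  condition: Condition,
  participantHistory: HeadPose[],   // participant's tracked poses so far
  previousParticipant: HeadPose[],  // nods recorded from an earlier session
  actorRecording: HeadPose[],       // the pre-recorded actor's own movements
  frame: number,
  delayFrames: number               // short lag used when mimicking
): HeadPose {
  switch (condition) {
    case 'mimic':    // copy the participant's own nodding after a short delay
      return participantHistory[Math.max(0, frame - delayFrames)];
    case 'playback': // a previous participant's nods, unconnected with current content
      return previousParticipant[frame % previousParticipant.length];
    case 'original': // the actor's pre-recorded movements
      return actorRecording[frame % actorRecording.length];
  }
}
```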