Abstract: In this paper, we report on a free-hand motion capture study in which 32 participants 'traced' 16 melodic vocal phrases with their hands in the air in two experimental conditions. Melodic contours are often thought of as correlated with vertical movement (up and down) over time, and this was also our initial expectation. We did find an arch shape for most of the tracings, although this did not correspond directly to the melodic contours. Furthermore, representation of pitch in the vertical dimension was but one of a diverse range of movement strategies used to trace the melodies. Six different mapping strategies were observed, and these strategies have been quantified and statistically tested. The conclusion is that metaphorical representation is much more common than a 'graph-like' rendering for such a melodic sound-tracing task. Other findings include a clear gender difference for some of the tracing strategies and an unexpected representation of melodies in terms of a small object for some of the Hindustani music examples. The data also show a tendency for participants to move within a shared 'social box'.
This paper presents an exploratory production study of Bharatanatyam, a figurative (narrative) dance. We investigate the encoding of coreference vs. disjoint reference in this dance and argue that a formal semantics of narrative dance can be modeled in line with Abusch's (2013, 2014, 2015) semantics of visual narrative (drawing also on Schlenker's (2017a) approach to music semantics). A main finding of our investigation is that larger-level group boundaries (Charnavel, 2016) can be seen as triggers for discontinuity inferences (possibly involving the dynamic shift from one salient entity to another).
Keywords: co-reference, disjoint reference, dance semantics, iconic semantics, picture semantics.
Cross-modal integration is ubiquitous in perception, and, in humans, the McGurk effect demonstrates that seeing a person articulate speech can change what we hear into a new auditory percept. It remains unclear whether cross-modal integration of sight and sound generalizes to other visible vocal articulations, such as those made by singers. We surmise that perceptual integrative effects should involve music deeply, since there is ample indeterminacy and variability in its auditory signals. We show that switching videos of sung musical intervals systematically changes the estimated distance between the two notes of a musical interval: pairing the video of a smaller sung interval with a relatively larger auditory interval led to compression effects on rated intervals, whereas the reverse pairing led to a stretching effect. In addition, after seeing a visually switched video of an equally tempered sung interval and then hearing the same interval played on the piano, participants often judged the two intervals to be different even though they differed only in instrument. These findings reveal spontaneous cross-modal integration of vocal sounds and clearly indicate that strong integration of sound and sight can occur beyond the articulations of natural speech.
This paper describes an experiment in which the subjects performed a sound-tracing task to vocal melodies. They could move freely in the air with two hands, and their motion was captured using an infrared, marker-based system. We present a typology of distinct strategies used by the recruited participants to represent their perception of the melodies. These strategies appear as ways to represent time and space through the finite motion possibilities of two hands moving freely in space. We observe these strategies and present their typology through qualitative analysis. Then we numerically verify the consistency of these strategies by conducting tests of significance between labeled and random samples.
As formal theoretical linguistic methodology has matured, recent years have seen the advent of applying it to objects of study that transcend language, e.g., to the syntax and semantics of music (Lerdahl & Jackendoff 1983, Schlenker 2017a; see also Rebuschat et al. 2011). One of the aims of such extensions is to shed new light on how meaning is construed in a range of communicative systems. In this paper, we approach this goal by looking at narrative dance in the form of Bharatanatyam. We argue that a semantic approach to dance can be modeled closely after the formal semantics of visual narrative proposed by Abusch (2013, 2014, 2021). A central conclusion is that dance not only shares properties of other fundamentally human means of expression, such as visual narrative and music, but that it also exhibits similarities to sign languages and the gestures of non-signers (see, e.g., Schlenker 2020) in that it uses space to track individuals in a narrative and performatively portray the actions of those individuals. From the perspective of general human cognition, these conclusions corroborate the idea that linguistic investigations beyond language (see Patel-Grosz et al. forthcoming) can yield insights into the very nature of the human mind and of the communicative devices that it avails.