Many mating signals consist of multimodal components that must be decoded by several sensory modalities on the receiver's side. For methodological and conceptual reasons, the communicative functions of these signals are often investigated only one at a time. Likewise, variation in single signal traits is frequently correlated by researchers with senders' quality or receivers' behavioral responses. Consequently, the two classic and still dominant hypotheses regarding the communicative meaning of multimodal mating signals postulate that different components either serve as backup messages or provide multiple meanings. Here we discuss how this conceptual dichotomy might have hampered a more integrative, perception-encompassing understanding of multimodal communication: neither the multiple-message nor the backup-signal hypothesis addresses the possibility that multimodal signals are integrated neurally into one percept. Therefore, when studying multimodal mating signals, we should be aware that they can give rise to multimodal percepts. This means that receivers can gain access to additional information inherent only in the combination of signal components ("the whole is something different from the sum of its parts"). We review the evidence for the importance of multimodal percepts and outline potential avenues for the discovery of multimodal percepts in animal communication.
Understanding animal behaviour through psychophysical experimentation is often limited by insufficiently realistic stimulus representation. Important physical dimensions of signals and cues, especially those outside the spectrum of human perception, can be difficult to standardize and control separately with currently available recording and display techniques (e.g. video displays). Accurate stimulus control is particularly important when studying multimodal signals, as spatial and temporal alignment between stimuli is often crucial. Especially for audiovisual presentations, some of these limitations can be circumvented by employing animal robots, which are superior to video presentations in all situations requiring realistic 3D presentation to animals. Here we report the development of a robotic zebra finch, called RoboFinch, and how it can be used to study vocal learning in a songbird, the zebra finch.
Bird song and human speech are learned early in life, and in both cases engagement with live social tutors generally leads to better learning outcomes than passive audio-only exposure. Real-world tutor–tutee interactions are normally multimodal rather than unimodal, and observations suggest that visual cues related to sound production might enhance vocal learning. We tested this hypothesis by pairing appropriate, colour-realistic, high frame-rate videos of a singing adult male zebra finch tutor with song playbacks and presenting these stimuli to juvenile zebra finches (Taeniopygia guttata). Juveniles exposed to song playbacks combined with video presentation of a singing bird approached the stimulus more often and spent more time close to it than juveniles exposed to audio playback alone or to audio playback combined with pixelated, time-reversed videos. However, higher engagement with the realistic audio–visual stimuli was not predictive of better song learning. Thus, although multimodality increased stimulus engagement and biologically relevant video content was more salient than colour- and movement-equivalent videos, the higher engagement with the realistic audio–visual stimuli did not lead to enhanced vocal learning. Whether the lack of three-dimensionality of a video tutor and/or the lack of meaningful social interaction makes video tutors less suitable for facilitating song learning than audio–visual exposure to a live tutor remains to be tested.
Singing in birds is accompanied by beak, head and throat movements. These visual cues have long been hypothesised to be important facilitators of vocal communication, including social interaction and song acquisition, but have seen little experimental study. To address whether audio-visual cues are relevant for birdsong, we used high-speed video recording, 3D scanning, 3D printing technology and colour-realistic painting to create RoboFinch, an open-source, adult-mimicking robot that matches the temporal and chromatic properties of songbird vision. We exposed several groups of juvenile zebra finches, during their song-development phase, to one of six singing robots that moved their beaks in synchrony with their song, and compared them with birds in a non-synchronised treatment and two control treatments. Juveniles in the synchronised treatment approached the robot setup from the start of the experiment and progressively increased the time they spent singing, in contrast to the other treatment groups. Interestingly, birds in the synchronised group seemed to actively listen during tutor song playback: they sang less during the actual song playback than birds in the asynchronous and audio-only control treatments. Our open-source RoboFinch setup thus provides an unprecedented tool for the systematic study of the functionality and integration of audio-visual cues associated with song behaviour. Realistic head and beak movements aligned to specific song elements may allow future studies to assess the importance of multisensory cues during song development, sexual signalling and social behaviour. All software and assembly instructions are open source, and the robot can be easily adapted to other species. Experimental manipulations of stimulus combinations and synchronisation can further elucidate how audio-visual cues are integrated by receivers and how they may enhance signal detection, recognition, learning and memory.
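To make the synchronisation idea above concrete, the Python sketch below drives a hobby servo (standing in for the robot's beak) from a table of song-element onsets while the song plays back. This is a minimal illustrative sketch, not the published RoboFinch code: the CSV annotation format, the choice of the gpiozero servo interface, the GPIO pin number and the use of the `aplay` command-line player are all assumptions introduced here for illustration.

```python
# Illustrative sketch of audio-synchronised beak movement, in the spirit of
# RoboFinch. Annotation format, servo interface and timing values are
# assumptions, not the published implementation.
import csv
import subprocess
import time

from gpiozero import Servo  # any hobby-servo driver would work equally well

BEAK_SERVO_PIN = 17          # assumed GPIO pin for the beak servo
beak = Servo(BEAK_SERVO_PIN)

def load_annotations(path):
    """Read (onset_s, offset_s) pairs for song elements from a CSV file."""
    with open(path, newline="") as f:
        return [(float(row["onset_s"]), float(row["offset_s"]))
                for row in csv.DictReader(f)]

def play_with_beak(wav_path, annotation_path):
    """Start song playback and open/close the beak at each song element."""
    elements = load_annotations(annotation_path)
    # Launch audio playback in the background (assumes the ALSA `aplay` CLI).
    player = subprocess.Popen(["aplay", wav_path])
    t0 = time.monotonic()
    for onset, offset in elements:
        # Wait until the element's onset, then open the beak until its offset.
        while time.monotonic() - t0 < onset:
            time.sleep(0.001)
        beak.max()   # beak open
        while time.monotonic() - t0 < offset:
            time.sleep(0.001)
        beak.min()   # beak closed
    player.wait()

if __name__ == "__main__":
    # Example call; file names are placeholders.
    play_with_beak("tutor_song.wav", "tutor_song_elements.csv")
```

A real setup would additionally have to compensate for audio-output latency and servo travel time; this millisecond-level alignment between modalities is exactly the kind of stimulus control that the abstracts above argue is hard to achieve with video displays.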