Summary

1. Quantitative aspects of the study of animal and human behaviour are increasingly relevant for testing hypotheses and finding empirical support for them. At the same time, photo and video cameras can store large numbers of video recordings and are often used to monitor subjects remotely. Researchers frequently face the need to code considerable quantities of video recordings with relatively flexible software, often constrained by species-specific options or exact settings.

2. BORIS is a free, open-source and multiplatform standalone program that allows a user-specific coding environment to be set up for a computer-based review of previously recorded videos or for live observations. Being open to user-specific settings, the program allows a project-based ethogram to be defined that can then be shared with collaborators, imported or modified.

3. Projects created in BORIS can include a list of observations, and each observation may include one or two videos (e.g. simultaneous screening of visual stimuli and the subject being tested; recordings from different sides of an aquarium). Once the user has defined an ethogram, comprising state events, point events or both, coding can be performed using keys previously assigned on the computer keyboard. BORIS allows an unlimited number of events (state/point events) and subjects to be defined.

4. Once the coding process is completed, the program can automatically extract a time budget for single or grouped observations and present an at-a-glance summary of the main behavioural features. The observation data and time-budget analysis can be exported in many common formats (TSV, CSV, ODF, XLS, SQL and JSON), and the observed events can be plotted and exported in various graphic formats (SVG, PNG, JPG, TIFF, EPS and PDF).
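As a concrete illustration of what such an export enables, here is a minimal Python sketch that aggregates a BORIS-style CSV export into a time budget. The column names used ("Subject", "Behavior", "Start (s)", "Stop (s)") are assumptions for illustration only; the actual headers depend on the BORIS version and the export options chosen.

```python
# Minimal sketch: summarising a BORIS-style CSV export into a time budget.
# Column names are assumptions for illustration, not BORIS's exact headers.
import pandas as pd

events = pd.read_csv("observation_export.csv")

# Duration of each state event; a point event would have Start == Stop.
events["Duration (s)"] = events["Stop (s)"] - events["Start (s)"]

# Total time, event count and mean duration per subject and behaviour.
time_budget = (
    events.groupby(["Subject", "Behavior"])["Duration (s)"]
          .agg(total_s="sum", n_events="count", mean_s="mean")
          .reset_index()
)
print(time_budget)
```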
A crucial, common feature of speech and music is that they show non-random structures over time. It is an open question which other species share rhythmic abilities with humans, and in most cases a lack of knowledge about their behavioral displays prevents further study. Indris are the only lemurs that sing. They produce loud howling cries that can be heard several kilometers away, and all members of a group usually join the song. We tested whether overlapping and turn-taking during the songs followed a precise pattern by analysing the temporal structure of individuals' contributions to the song. We found that both dominants (males and females) and non-dominants influenced one another's onset timing. The dominant male and the dominant female in a group overlapped each other more frequently than they did the non-dominants. We then focused on the temporal and frequency structure of particular phrases occurring during the song. Our results show that males and females have dimorphic inter-onset intervals during the phrases. Moreover, the median frequencies of the units emitted in the phrases also differ between the sexes, with males showing higher frequencies than females. We found no effect of age on the temporal or spectral structure of the phrases. These results indicate that singing in indris shows high behavioral flexibility and varies according to social and individual factors. The flexible spectral structure of the phrases given during the song may underlie perceptual abilities that are largely unknown in other non-human primates, such as the ability to recognize particular pitch patterns.
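For readers unfamiliar with the inter-onset-interval (IOI) measure mentioned above, the sketch below shows how IOIs are typically derived from the onset times of successive units; the onset values are invented for illustration only.

```python
# Minimal sketch of the inter-onset-interval (IOI) measure:
# the time elapsed between the onsets of successive units in a phrase.
import numpy as np

# Onsets (seconds) of successive units within one phrase (made-up values).
onsets = np.array([0.00, 0.41, 0.85, 1.32, 1.84])

iois = np.diff(onsets)    # interval between consecutive onsets
print(iois)               # [0.41 0.44 0.47 0.52]
print(np.median(iois))    # one possible per-phrase summary statistic
```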
Contextual variation in the loud calls of strepsirhine primates is poorly understood. To understand whether songs given by indris in different contexts represent acoustically distinct variants and have the potential to elicit context-specific behaviours in conspecific listeners, we investigated the acoustic variability of these songs and the distance travelled by vocalizers after song emission. Songs of 41 individuals were recorded from 16 indri groups at four forest sites in eastern Madagascar. We collected a total of 270 duets and choruses arising during territorial defence, advertisement and cohesion. We demonstrated that the structure of indri songs conveyed context-specific information through their overall duration, but that songs shared a sequential pattern of harsh units (roars) followed by long notes and, finally, descending phrases. Analysing individual contributions to advertisement and cohesion songs in detail, we found that the acoustic structure of units could be classified correctly with a high degree of reliability (96.23% of long notes, 80.16% of descending phrases, 72.54% of roars). Future investigations using playback stimuli could explore the relationship between acoustic features and the information transmitted by the song.
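As a rough illustration of the kind of cross-validated classification reported above, the sketch below uses scikit-learn's LinearDiscriminantAnalysis as a stand-in for the discriminant function analysis used in the study. The acoustic measurements and unit labels here are randomly generated placeholders, not the study's data.

```python
# Minimal sketch of cross-validated discriminant classification of song
# units. X holds one row of acoustic measurements per unit; y holds the
# unit type. Random data stands in for real measurements.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))                              # 6 acoustic parameters
y = rng.choice(["roar", "long_note", "descending"], 300)   # unit types

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=10)   # 10-fold cross-validation
print(scores.mean())                         # ~chance level on random data
```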
The African penguin is a nesting seabird endemic to southern Africa. In penguins of the genus Spheniscus, vocalisations are important for social recognition. However, it is not clear which acoustic features of calls can encode individual identity information. We recorded contact calls and ecstatic display songs of 12 adult birds from a captive colony. For each vocalisation, we measured 31 spectral and temporal acoustic parameters related to both the source and filter components of calls. For each parameter, we calculated the Potential of Individual Coding (PIC). The acoustic parameters showing PIC ≥ 1.1 were used to perform a stepwise cross-validated discriminant function analysis (DFA). The DFA correctly classified 66.1% of the contact calls and 62.5% of the display songs to the correct individual. The stepwise procedure further selected 10 acoustic features for contact calls and 9 for display songs as important for vocal individuality. Our results suggest that studying the anatomical constraints that influence nesting penguin vocalisations from a source-filter perspective can lead to a much better understanding of the acoustic cues to individuality contained in their calls. This approach could be further extended to study and understand vocal communication in other bird species.
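The Potential of Individual Coding mentioned above is commonly computed as the ratio of between-individual variation to mean within-individual variation; whether the study used exactly this formula is an assumption. A minimal sketch with made-up data:

```python
# Minimal sketch of the Potential of Individual Coding (PIC) for one
# acoustic parameter, under one common definition: the coefficient of
# variation between individuals (CVb) divided by the mean within-individual
# coefficient of variation (CVw). Data below are illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
calls = pd.DataFrame({
    "bird": ["a"] * 5 + ["b"] * 5 + ["c"] * 5,
    "f0_hz": np.concatenate([rng.normal(loc, 10, 5) for loc in (400, 450, 500)]),
})

def cv(x):
    """Coefficient of variation: SD / mean."""
    return x.std(ddof=1) / x.mean()

cv_within = calls.groupby("bird")["f0_hz"].apply(cv).mean()   # mean CVw
cv_between = cv(calls.groupby("bird")["f0_hz"].mean())        # CVb
pic = cv_between / cv_within
print(pic)   # PIC > 1 suggests the parameter may carry identity information
```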