Among-individual variation in performance on cognitive tasks is ubiquitous across the species that have been examined, and understanding the evolution of cognitive abilities requires investigating among-individual variation because natural selection acts on individual differences. However, relatively little is known about the extent to which individual differences in cognition are determined by domain-specific compared with domain-general cognitive abilities. We examined individual differences in the learning speed of zebra finches across seven different tasks to determine the extent of domain-specific versus domain-general learning abilities, as well as the relationship between learning speed and learning generalization. Thirty-two zebra finches completed a foraging-board experiment that included visual and structural discriminations, and the same birds then went through an acoustic operant discrimination experiment that required discriminating between different natural categories of acoustic stimuli. We found evidence of domain-general learning abilities: birds’ relative performance on the seven learning tasks was weakly repeatable, and a principal components analysis found a first principal component that explained 36% of the variance in performance across tasks, with all tasks loading unidirectionally on this component. However, the few significant correlations between tasks and the high repeatability within each experiment suggest the potential for domain-specific abilities. Learning speed did not influence an individual’s ability to generalize learning. These results suggest that zebra finch performance across visual, structural, and auditory learning relies upon some common mechanism; some might call this evidence of “general intelligence” (g), but it is also possible that this finding is due to other noncognitive mechanisms such as motivation.

Supplementary information: The online version contains supplementary material available at 10.3758/s13420-022-00520-w.
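To make the analysis concrete, below is a minimal sketch of the kind of principal components analysis described above, applied to a hypothetical 32-bird by 7-task matrix of learning speeds. The data, variable names, and scikit-learn pipeline are illustrative assumptions, not the authors' code or data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical data: 32 birds x 7 tasks, trials to criterion (lower = faster learner)
learning_speed = rng.normal(loc=100, scale=15, size=(32, 7))

# Standardize each task so the PCA operates on the correlation structure
z = StandardScaler().fit_transform(learning_speed)

pca = PCA().fit(z)
print(f"Variance explained by PC1: {pca.explained_variance_ratio_[0]:.0%}")
# All loadings sharing the same sign on PC1 is the classic signature of a
# domain-general factor, often labelled g; with real data, repeatability
# would typically be estimated separately (e.g., with a mixed model).
print("PC1 loadings:", pca.components_[0])
```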
When anthropogenic noise occurs simultaneously with an acoustic signal or cue, it can be difficult for an animal to interpret the information encoded within vocalizations. However, limited research has focused on how anthropogenic noise affects the identification of acoustic communication signals. In songbirds, research has shown that black-capped chickadees (Poecile atricapillus) will shift the pitch of their songs and change how often they sing in the presence of anthropogenic and experimental noise. Black-capped chickadees produce several vocalizations; their fee-bee song is used for mate attraction and territorial defence, and contains information about dominance hierarchy and native geographic location. Previously, we demonstrated that black-capped chickadees can discriminate between individual female chickadees via their fee-bee songs. Here we used an operant go/no-go discrimination paradigm to determine whether the ability to discriminate between individual female chickadees by their songs would be impacted by differing levels of anthropogenic noise. Following discrimination training, two levels of anthropogenic noise (low: 40 dB SPL; high: 75 dB SPL) were played with the stimuli to determine how the noise would affect discrimination. Even low-level noise (40 dB SPL) decreased performance, and high-level noise (75 dB SPL) was more detrimental still. Perception of fee-bee songs does change in the presence of anthropogenic noise: birds took significantly longer to learn to discriminate between females, but they were able to generalize responding after learning the discrimination. These results add to the growing literature underscoring the impact of human-made noise on avian wildlife, specifically its impact on the perception of auditory signals.
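As an illustration of the kind of stimulus preparation implied by the two noise conditions, the sketch below mixes a noise track into a song recording at a target sound-pressure level. The file name, calibration constant, white-noise stand-in, and use of numpy/soundfile are all assumptions for illustration; in a real experiment the calibrated playback chain, not the file, sets the absolute SPL.

```python
import numpy as np
import soundfile as sf

# Hypothetical calibration: a signal with RMS 1.0 plays back at 94 dB SPL
CAL_DB_AT_UNIT_RMS = 94.0

def scale_to_spl(x, target_db):
    """Scale a waveform so its RMS corresponds to target_db SPL (given the calibration)."""
    current_db = CAL_DB_AT_UNIT_RMS + 20 * np.log10(np.sqrt(np.mean(x ** 2)))
    return x * 10 ** ((target_db - current_db) / 20)

song, sr = sf.read("feebee_song.wav")  # hypothetical mono stimulus file
noise = np.random.default_rng(0).standard_normal(len(song))  # white-noise stand-in

# Low (40 dB SPL) and high (75 dB SPL) noise conditions from the abstract
for level in (40, 75):
    mix = song + scale_to_spl(noise, level)
    sf.write(f"song_noise_{level}dB.wav", mix, sr)
```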
Bioacoustic analysis has been used for a variety of purposes, including classifying vocalizations for biodiversity monitoring and understanding the mechanisms of cognitive processes. A wide range of statistical methods, including various automated methods, have been used to successfully classify vocalizations by species, sex, geography, and individual. Predicting which features are necessary for discrimination in the real world requires a comprehensive approach focused on identifying the acoustic features putatively involved in classification. Here, we used several classification techniques, namely discriminant function analyses (DFAs), support vector machines (SVMs), and artificial neural networks (ANNs), for sex-based classification of zebra finch (Taeniopygia guttata) distance calls using acoustic features measured from spectrograms. All three methods (DFAs, SVMs, and ANNs) correctly classified the calls to their respective sex-based categories with high accuracy (92–96%). Frequency modulation of the ascending frequency, total duration, and end frequency of the distance call were the most predictive features underlying this classification in all of our models. Our results corroborate evidence of the importance of total call duration and frequency modulation in the classification of male and female distance calls. Moreover, we provide a methodological approach to bioacoustic classification problems using multiple statistical analyses.
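Here is a minimal sketch of the three-classifier comparison described above, using scikit-learn on a hypothetical table of per-call acoustic measurements; the feature set, synthetic data, and cross-validation scheme are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_calls = 200
# Hypothetical spectrogram measurements per call:
# [ascending frequency modulation, total duration, end frequency]
X = rng.normal(size=(n_calls, 3))
y = rng.integers(0, 2, size=n_calls)  # 0 = female, 1 = male (synthetic labels)

models = {
    "DFA": LinearDiscriminantAnalysis(),  # linear discriminant analysis stands in for DFA
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "ANN": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
}

for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.1%} cross-validated accuracy")
```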
In songbirds, song has traditionally been considered a vocalization produced mainly by males. However, recent research suggests that both sexes produce song. While the function and structure of the male black-capped chickadee (Poecile atricapillus) fee-bee song have been well studied, research on female song is comparatively limited. Past discrimination and playback studies have shown that male black-capped chickadees can discriminate between individual males via their fee-bee songs. Recently, we showed that male and female black-capped chickadees can identify individual females via their fee-bee songs even when presented with only the bee portion of the song. Our results using discriminant function analyses (DFAs) indicate that female songs are individually distinctive: songs could be correctly classified to individual (81%) and season (97%) based on several acoustic features, including but not limited to bee-note duration and fee-note peak frequency. In addition, an artificial neural network trained on the DFA-selected acoustic features achieved 90% accuracy by individual and 93% by season. While this study provides a quantitative description of the acoustic structure of female song, the perception and function of female song in this species require further investigation.
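To illustrate the two-stage workflow described above (DFA-based feature selection followed by an artificial neural network), here is a hedged sketch on synthetic data; the number of features, the selection cutoff, and the labels are assumptions for illustration only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_songs, n_features = 300, 9
X = rng.normal(size=(n_songs, n_features))  # e.g., bee-note duration, fee-note peak frequency, ...
y = rng.integers(0, 10, size=n_songs)       # hypothetical IDs for 10 individual females

X = StandardScaler().fit_transform(X)

# Stage 1: DFA (linear discriminant analysis) to rank features by mean |coefficient|
lda = LinearDiscriminantAnalysis().fit(X, y)
importance = np.abs(lda.coef_).mean(axis=0)
selected = np.argsort(importance)[-4:]  # keep the four most discriminative features

# Stage 2: train an ANN on the DFA-selected features only
X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, random_state=0)
ann = MLPClassifier(max_iter=2000, random_state=0).fit(X_tr, y_tr)
print(f"Accuracy by individual: {ann.score(X_te, y_te):.0%}")
```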