During vocalization, an animal's vocal organ transforms motor commands into the sounds used for social communication. In birds, the physical mechanisms by which vocalizations are produced and controlled remain unresolved because of the extreme difficulty of obtaining in vivo measurements. Here, we introduce an ex vivo preparation of the avian vocal organ that allows simultaneous high-speed imaging, muscle stimulation, and kinematic and acoustic analyses to reveal the mechanisms of vocal production in birds across a wide range of taxa. Remarkably, we show that all species tested employ the myoelastic-aerodynamic (MEAD) mechanism, the same mechanism used to produce human speech. Furthermore, we show substantial redundancy in the control of key vocal parameters ex vivo, suggesting that in vivo vocalizations may likewise not be specified by unique motor commands. We propose that such motor redundancy can aid vocal learning and is common to MEAD sound production across birds and mammals, including humans.
The deep learning (DL) revolution is touching all scientific disciplines and corners of our lives as a means of harnessing the power of big data. Marine ecology is no exception. New methods enable rapid, reproducible analysis of data from sensors, cameras, and acoustic recorders, even in real time. Off-the-shelf algorithms find, count, and classify species in digital images or video and detect cryptic patterns in noisy data. These endeavours require collaboration across ecological and data science disciplines, which can be challenging to initiate. To promote the use of DL towards ecosystem-based management of the sea, this paper aims to bridge the gap between marine ecologists and computer scientists. We provide insight into popular DL approaches for ecological data analysis, focusing on supervised learning techniques with deep neural networks, and illustrate challenges and opportunities through established and emerging applications of DL to marine ecology. We present case studies on plankton, fish, marine mammals, pollution, and nutrient cycling that involve object detection, classification, tracking, and segmentation of visual data. We conclude with a broad outlook on the field's opportunities and challenges, including potential technological advances and issues with managing complex data sets.
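To make the supervised-learning workflow surveyed above concrete, the sketch below runs one training step of a small convolutional classifier on a dummy batch of images, of the kind used to classify species in digital imagery. It is a minimal PyTorch example; the network shape, image size, and class count are illustrative assumptions, not a pipeline from the cited case studies.

```python
# Minimal sketch of a supervised CNN image classifier (illustrative only;
# not the authors' pipeline). Shapes and class counts are assumptions.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to one value per channel
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
images = torch.randn(8, 3, 64, 64)   # dummy batch of 64x64 RGB images
labels = torch.randint(0, 5, (8,))   # dummy species labels
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                      # gradients for one supervised step
print(f"loss: {loss.item():.3f}")
```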
Vocal expression of emotions has been observed across species and could provide a non-invasive and reliable means to assess animal emotions. We investigated whether pig vocal indicators of emotions revealed in previous studies are valid across call types and contexts, and could potentially be used to develop an automated emotion monitoring tool. We analysed an extensive and unique dataset of low-frequency (LF) and high-frequency (HF) calls emitted by pigs across numerous commercial contexts from birth to slaughter (7414 calls from 411 pigs). Our results revealed that the valence attributed to the contexts of production (positive versus negative) affected all investigated parameters in both LF and HF calls. Similarly, the context category affected all parameters. We then tested two automated methods for call classification: a neural network achieved much higher classification accuracy than a permuted discriminant function analysis (pDFA), both for valence (neural network: 91.5%; pDFA, weighted average across LF and HF, cross-classified: 61.7%, with a chance level of 50.5%) and for context (neural network: 81.5%; pDFA, weighted average across LF and HF, cross-classified: 19.4%, with a chance level of 14.3%). These results suggest that an automated recognition system can be developed to monitor pig welfare on-farm.
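For orientation, the quoted chance levels follow from the number of classes: roughly 1/2 for the binary valence task and 1/7 ≈ 14.3% for the seven context categories. The sketch below reproduces that arithmetic and compares the reported accuracies against chance using a simple normalized lift; the exact class proportions behind the 50.5% figure are not given in the text, so the comparison is indicative only.

```python
# Chance levels implied by the number of classes (the reported 50.5%
# reflects mild class imbalance not detailed in the abstract).
print(f"balanced chance, valence (2 classes): {1 / 2:.1%}")   # 50.0%
print(f"balanced chance, context (7 classes): {1 / 7:.1%}")   # 14.3%

def lift_over_chance(accuracy: float, chance: float) -> float:
    """Fraction of the possible above-chance gain actually achieved."""
    return (accuracy - chance) / (1.0 - chance)

# Reported valence accuracies vs. the reported 50.5% chance level.
print(f"neural network: {lift_over_chance(0.915, 0.505):.1%} of max gain")
print(f"pDFA:           {lift_over_chance(0.617, 0.505):.1%} of max gain")
```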
Passive acoustic monitoring has proven to be an indispensable tool for many aspects of baleen whale research, but manual detection of whale calls in the resulting large data sets demands extensive labor. Automated whale call detectors offer a more efficient approach and have been developed for many species and call types. However, highly variable calls such as the fin whale (Balaenoptera physalus) 40 Hz call and the blue whale (B. musculus) D call have been challenging to detect automatically, and hence no practical automated detector exists for these two call types. Using a modular approach consisting of a Faster Region-based Convolutional Neural Network (Faster R-CNN) followed by a convolutional neural network (CNN), we have created automated detectors for 40 Hz calls and D calls. Both detectors were tested on recordings with high and low densities of calls and, when selecting for detections with high classification scores, showed precision of 54% to 57% with recall of 72% to 78% for 40 Hz calls, and precision of 62% to 64% with recall of 70% to 73% for D calls. As these two call types are produced by both sexes, using them in long-term studies would remove sex bias in estimates of temporal presence and movement patterns.
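Precision and recall here have their standard meanings: the fraction of detections that are true calls, and the fraction of true calls that are detected. The sketch below computes both from hypothetical detection counts, chosen only so the results fall inside the reported ranges; the study's actual counts are not given in the abstract.

```python
# Precision/recall from hypothetical detector counts (illustrative only).
true_positives = 144    # detections matching an analyst-annotated call
false_positives = 96    # detections with no matching annotated call
false_negatives = 48    # annotated calls the detector missed

precision = true_positives / (true_positives + false_positives)  # 0.60
recall = true_positives / (true_positives + false_negatives)     # 0.75
print(f"precision: {precision:.0%}, recall: {recall:.0%}")
```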