To better understand the spawning vocalizations of Norwegian coastal cod (Gadus morhua), a prototype eight-element coherent hydrophone array was deployed in stationary vertical and towed horizontal modes to monitor cod sounds during an experiment in spring 2019. The depth distribution of cod aggregations was monitored concurrently with an ultrasonic echosounder. Cod vocalizations recorded on the hydrophone array are analysed to provide time–frequency characteristics and the source level distribution after correcting for one-way transmission losses from cod locations to the hydrophone array. The recorded cod vocalizations range in frequency from ∼20 to 600 Hz, with a peak power frequency of ∼60 Hz, an average duration of 300 ms, and a mean source level of 163.5 ± 7.9 dB re 1 μPa at 1 m. The spatial dependence of received cod vocalization rates is estimated from hydrophone array measurements as the array is towed horizontally from the deeper surrounding waters to the shallow-water inlet areas of the experimental site. The bathymetry-dependent probability-of-detection regions for cod vocalizations are quantified and found to be significantly reduced in the shallow-water areas of the inlet. We show that a towable hydrophone array deployed from a moving vessel is invaluable because it can survey cod vocalization activity at multiple locations, providing continuous spatial coverage complementary to fixed sensor systems, which provide continuous temporal coverage at a single location.
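The source-level correction described above can be illustrated with a minimal sketch. The abstract does not specify the propagation model, so spherical spreading (20 log₁₀ r) with an optional linear absorption term is assumed here purely for illustration; the study's actual transmission-loss correction may differ, and the numeric values below are hypothetical.

```python
import math

def source_level_db(received_level_db, range_m, alpha_db_per_m=0.0):
    """Estimate source level (dB re 1 uPa at 1 m) from a received level
    by adding back the one-way transmission loss. Spherical spreading
    (20 log10 r) plus a linear absorption term is assumed here."""
    transmission_loss_db = 20.0 * math.log10(range_m) + alpha_db_per_m * range_m
    return received_level_db + transmission_loss_db

# Hypothetical example: a call received at 120 dB re 1 uPa
# from a cod located 150 m from the array
sl = source_level_db(120.0, 150.0)
print(round(sl, 1))  # 163.5
```

Under these assumed numbers the estimate happens to fall near the reported mean source level; in practice the range to each vocalizing cod and the site-specific propagation conditions drive the correction.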
A large variety of sound sources in the ocean, including biological, geophysical, and man-made activities, can be monitored simultaneously over instantaneous continental-shelf-scale regions via the passive ocean acoustic waveguide remote sensing (POAWRS) technique, which employs a large-aperture, densely sampled coherent hydrophone array. The millions of acoustic signals received on the POAWRS system per day make it challenging to identify individual sound sources, so an automated classification system is necessary for sound sources to be recognized. Here a large training data set of fin whale and other vocalizations is gathered after manual inspection and labelling. Next, multiple classifiers, including neural networks, logistic regression, support vector machines (SVM), and decision trees, are built and tested for identifying fin whale and other vocalizations among the enormous number of acoustic signals detected per day. The neural network classifier uses beamformed spectrograms to classify acoustic signals, while the logistic regression, SVM, and decision tree classifiers use multiple features extracted from each detection: the mean, minimum, and maximum frequencies, bandwidth, signal duration, frequency–time slope, and curvature. The performance of the classifiers is evaluated and compared using multiple metrics, including accuracy, precision, recall, and F1-score.
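The evaluation metrics named above can be sketched from binary confusion-matrix counts. The counts in the example are hypothetical, not results from the study; the sketch only shows how accuracy, precision, recall, and F1-score relate for a fin-whale-call-versus-other detection task.

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall, and F1-score from binary
    confusion-matrix counts (e.g. fin whale call vs. other detection).

    tp/fp: detections labelled fin whale correctly/incorrectly;
    fn/tn: detections labelled other incorrectly/correctly."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # fraction of predicted calls that are real
    recall = tp / (tp + fn)             # fraction of real calls that are found
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for one classifier on a labelled test set
acc, prec, rec, f1 = classification_metrics(tp=90, fp=10, fn=20, tn=880)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")
```

Comparing these four numbers across the neural network, logistic regression, SVM, and decision tree classifiers makes trade-offs visible, e.g. a classifier with high precision but low recall misses real fin whale calls while producing few false alarms.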