2023
DOI: 10.1016/j.compag.2023.107760
Acoustic sensors for automated detection of cow vocalization duration and type

Cited by 9 publications (10 citation statements)
References 30 publications
“…As shown in Table 2, both the explainable and DL models accurately classified low- and high-frequency calls, with 87.2% and 89.4% accuracy, respectively. This slightly outperforms (by 2% and 4.4%) the current state-of-the-art model (33), which used a smaller dataset of n = 10 individuals. Notably, the difference in model performance between the training and testing cohorts was around 2.5% in favor of the training cohort, compared to the state of the art, which reports a 14.2% difference.…”
Section: Discussion
confidence: 73%
“…Methods of studying animal vocal communication are becoming increasingly automated, with a growing body of research validating hardware and software capable of automatically collecting and processing bioacoustics data [reviewed by Mcloughlin et al. (18)]. In this vein, Shorten and Hunter (33) found significant variability in cattle vocalization parameters and suggested that such traits can be monitored using animal-attached acoustic sensors to provide information on the welfare and emotional state of the animal. Automated vocalization monitoring could therefore prove a useful tool in precision livestock farming (18, 34, 35), especially as dairy farming systems become increasingly automated through wide-scale use of milking and feeding robots; this creates the potential to dynamically adjust management practices as the number of animals per farm unit grows.…”
Section: Introduction
confidence: 99%
“…Machine learning techniques are therefore increasingly applied in the study of cattle vocalizations. Tasks addressed to date include classification of high- vs. low-frequency calls (33), ingestive behaviour (35), and categorization of call types such as oestrus calls and coughs (34).…”
Section: Introduction
confidence: 99%
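The high- vs. low-frequency call classification mentioned in the statement above can be illustrated with a minimal sketch. This is not the method of the cited studies: the 150 Hz decision threshold, the synthetic test tones, and the use of the single dominant spectral peak are all illustrative assumptions. A spectrum-based classifier along these lines might look like:

```python
import numpy as np

def dominant_frequency(signal, sr):
    # Magnitude spectrum of the call; return the frequency (Hz)
    # of the bin carrying the most energy.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return freqs[np.argmax(spectrum)]

def classify_call(signal, sr, threshold_hz=150.0):
    # Label a call "high" or "low" by comparing its dominant frequency
    # to a fixed threshold (threshold_hz is an illustrative value,
    # not taken from the cited work).
    return "high" if dominant_frequency(signal, sr) >= threshold_hz else "low"

sr = 8000  # sampling rate in Hz
t = np.linspace(0, 1, sr, endpoint=False)
low_call = np.sin(2 * np.pi * 80 * t)    # 80 Hz tone standing in for a low-frequency call
high_call = np.sin(2 * np.pi * 300 * t)  # 300 Hz tone standing in for a high-frequency call
print(classify_call(low_call, sr), classify_call(high_call, sr))  # → low high
```

In practice the cited models operate on richer acoustic features than a single spectral peak, but the sketch shows the core idea: extract a frequency descriptor per call, then apply a decision rule.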