Cattle are less active than humans; we therefore hypothesized that transmitting acceleration signals at a 1 min sampling interval, which reduces the storage load, could improve the performance of motion sensors without compromising the precision of behavior classification. Classification performance (precision, sensitivity, and F1-score) was determined for the 1 min serial datasets segmented into 3, 4, and 5 min windows and classified with nine algorithms. A collar-fitted triaxial accelerometer was attached to the right side of the neck of two fattening Korean steers (age: 20 months), which were observed for 6 h on day one, 10 h on day two, and 7 h on day three. The acceleration signals and visual observations were time-synchronized and analyzed according to the study objectives. Resting behavior was classified most accurately using the combination of a 4 min window and the long short-term memory (LSTM) algorithm, which yielded 89% precision, 81% sensitivity, and an 85% F1-score. The same method (4 min window and LSTM) also classified eating behavior well (79% precision, 88% sensitivity, and 83% F1-score). Active behavior was classified most poorly. This study showed that a collar-fitted triaxial sensor recording 1 min serial signals can detect the resting and eating behaviors of cattle with high precision when the acceleration signals are segmented into 4 min windows and classified with the LSTM algorithm.
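As a rough illustration of the pipeline described above (segmenting 1 min triaxial signals into 4 min windows and classifying them with an LSTM), the following minimal sketch uses Keras. It is not the authors' code: the arrays `acc` and `labels`, the majority-label windowing rule, and the network size are all illustrative assumptions.

```python
# Minimal sketch, not the study's implementation: window 1 min triaxial
# acceleration samples into 4 min segments and classify with an LSTM.
# `acc` (N x 3, one row per minute) and `labels` (one behavior code per
# minute: 0=rest, 1=eat, 2=active) are hypothetical stand-in data.
import numpy as np
import tensorflow as tf

WINDOW = 4  # minutes per window, the best-performing size in the study

def make_windows(acc, labels, window=WINDOW):
    """Slice the 1 min series into non-overlapping windows; each window
    is labeled with the majority behavior observed inside it (assumed rule)."""
    X, y = [], []
    for start in range(0, len(acc) - window + 1, window):
        X.append(acc[start:start + window])
        segment = labels[start:start + window]
        y.append(np.bincount(segment).argmax())  # majority label
    return np.asarray(X, dtype=np.float32), np.asarray(y)

# Toy data standing in for the collar measurements (3 axes per minute).
acc = np.random.randn(600, 3).astype(np.float32)
labels = np.random.randint(0, 3, size=600)
X, y = make_windows(acc, labels)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 3)),       # 4 timesteps x 3 axes
    tf.keras.layers.LSTM(32),                        # sequence encoder
    tf.keras.layers.Dense(3, activation="softmax"),  # rest / eat / active
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=16, verbose=0)
```

In practice the windowed predictions would be compared against the time-synchronized visual observations to compute the precision, sensitivity, and F1-score reported in the abstract.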
Image processing systems have become widespread with the digital transformation driven by artificial intelligence. Many researchers have developed and tested image classification models using machine learning and statistical techniques, yet current research seldom focuses on the quality assurance of these models. Existing methods do not verify quality assurance because they lack the test cases needed to prepare an evaluation dataset, which can cause critical failures in fields such as nuclear and defense systems. In this paper, we propose preparing the evaluation dataset from improved test cases derived through Cause-Effect Graphing. The proposed method generates the evaluation dataset with automated test cases through a quantification procedure consisting of 1) selecting image characteristics, 2) constructing a Cause-Effect graph of the image from those features, and 3) generating all possible test coverage. Testing on the COCO dataset shows that prediction accuracy declines as brightness and sharpness are adjusted between -75% and 75%, indicating that the existing test dataset neglects these important characteristics. The experiment shows that prediction fails when sharpness is below 0%, that prediction fails at a brightness of -75%, and that fewer objects are detected between -50% and 75%. This indicates that changes in image characteristics affect both prediction accuracy and the number of detected objects in an image. Our approach demonstrates the importance of the characteristic selection process for generating a more efficient model and increasing the accuracy of object detection.
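The sketch below illustrates one way such an evaluation set could be generated: sweeping brightness and sharpness from -75% to +75% of the original image, as in the experiment described above. It is not the paper's tool; the file name `coco_example.jpg`, the step size, and the mapping of a percentage to a PIL enhancement factor are assumptions made for illustration.

```python
# Minimal sketch, assuming a local COCO image `coco_example.jpg`:
# generate evaluation variants with brightness and sharpness adjusted
# between -75% and +75% of the original.
from PIL import Image, ImageEnhance

ADJUSTMENTS = [-75, -50, -25, 0, 25, 50, 75]  # percent change (assumed grid)

def adjust(img, brightness_pct, sharpness_pct):
    """Apply a percentage change to brightness and sharpness.
    A PIL enhancement factor of 1.0 leaves the image unchanged,
    so -75% maps to 0.25 and +75% maps to 1.75 (assumed convention)."""
    img = ImageEnhance.Brightness(img).enhance(1.0 + brightness_pct / 100)
    img = ImageEnhance.Sharpness(img).enhance(1.0 + sharpness_pct / 100)
    return img

original = Image.open("coco_example.jpg")
for b in ADJUSTMENTS:
    for s in ADJUSTMENTS:
        variant = adjust(original, b, s)
        variant.save(f"eval_b{b:+d}_s{s:+d}.jpg")  # one test case per grid cell
```

Each saved variant would then be fed to the object detector, and the prediction accuracy and number of detected objects recorded per brightness/sharpness combination.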