Animal dimensions are essential indicators for monitoring growth rate, diet efficiency, and health status. Computer vision is a recently emerging precision livestock farming technology that overcomes previously unresolved challenges pertaining to labor and cost. Depth sensor cameras can estimate the depth or height of an animal in addition to two-dimensional information, and collecting top-view depth images is common when evaluating body mass or conformation traits in livestock species. However, the depth image data acquisition process often involves manual intervention, such as controlling a camera from a laptop, or the detailed steps for automated data collection are not documented. Furthermore, open-source implementations of image data acquisition are rarely available. The objectives of this study were to 1) investigate the utility of automated top-view dairy cow depth data collection using picture- and video-based methods, 2) evaluate the performance of an infrared cut lens, and 3) make the source code available. Both methods automatically perform animal detection, trigger recording, capture depth data, and terminate recording for each animal. The picture-based method captures only a predetermined number of images, whereas the video-based method records a sequence of frames as a video; for the picture-based method, we evaluated 3- and 10-picture approaches. The depth sensor camera was mounted 2.75 m above the ground over a walk-through scale between the milking parlor and the free-stall barn. A total of 150 Holstein and 100 Jersey cows were evaluated. A pixel location at which depth was monitored was set up as a point of interest. More than 89% of cows were successfully captured with both the picture- and video-based methods, and the success rates improved to 92% and 98%, respectively, when combined with an infrared cut lens.
Although both the 10-picture method and the video-based method yielded accurate results for collecting depth data on cows, the former was more efficient in terms of data storage. This study demonstrates automated depth data collection frameworks and makes a Python implementation available to the community, which can help facilitate the deployment of computer vision systems for dairy cows.
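The trigger logic described above (detect an animal at a monitored pixel, start recording, capture a fixed number of frames, and terminate recording when the animal leaves) can be sketched in Python. This is an illustrative reconstruction, not the authors' released code: the `1.8` m trigger depth, the stream format, and all function names are assumptions for the example. A cow passing under the camera reduces the measured distance at the point of interest, which is what starts a capture session.

```python
# Hypothetical sketch of the picture-based capture loop: monitor the depth at
# a point-of-interest (POI) pixel; when the distance to the camera drops below
# a threshold (an animal's back is under the sensor), record up to n_pictures
# frames, and close the session when the POI returns to background depth.
# The 2.75 m camera height is from the abstract; the 1.8 m trigger is assumed.

CAMERA_HEIGHT_M = 2.75   # sensor height above the ground (from the abstract)
TRIGGER_DEPTH_M = 1.8    # assumed: POI depth below this implies a cow present


def capture_sessions(depth_stream, n_pictures=10, trigger_depth=TRIGGER_DEPTH_M):
    """Group frames into per-animal capture sessions.

    depth_stream: iterable of (frame, poi_depth_m) pairs, where poi_depth_m is
    the measured distance (in meters) at the point-of-interest pixel.
    Returns a list of frame lists, one list per animal that passed through.
    """
    sessions = []
    current = None
    for frame, poi_depth in depth_stream:
        if poi_depth < trigger_depth:
            if current is None:            # animal detected: start recording
                current = []
            if len(current) < n_pictures:  # picture-based cap on frame count
                current.append(frame)
        elif current is not None:          # animal left: terminate recording
            sessions.append(current)
            current = None
    if current is not None:                # stream ended mid-pass
        sessions.append(current)
    return sessions
```

For example, a simulated stream with two passes, using a 2-picture cap:

```python
depths = [2.7, 2.7, 1.5, 1.4, 1.5, 2.7, 1.6, 2.7]
stream = list(enumerate(depths))           # frame id paired with POI depth
print(capture_sessions(stream, n_pictures=2))  # → [[2, 3], [6]]
```

The video-based method would differ only in dropping the `n_pictures` cap and writing every frame of the session to a video file instead.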