Monitoring, recording, and predicting livestock body weight (BW) allows for timely intervention in diets and health, greater efficiency in genetic selection, and identification of the optimal time to market animals, because animals kept beyond their slaughter point represent a burden for the feedlot. There are currently two main approaches, direct and indirect, to measuring BW in livestock. Direct approaches use partial-weight or full-weight industrial scales placed at designated locations on large farms to measure livestock weight either passively or dynamically. While these devices are very accurate, their acquisition cost (which scales with intended purpose and operation size) and the repeated calibration and maintenance costs associated with placement in corrosive environments with high temperature variability are significant, exceeding the affordability and sustainability limits of small- and medium-sized farms and even of commercial operators. As a more affordable alternative to direct weighing, indirect approaches have been developed based on observed or inferred relationships between biometric and morphometric measurements of livestock and their BW. Early indirect approaches involved manual measurement of animals with measuring tapes and tubes, combined with regression equations that correlate these measurements with BW. While such approaches achieve good BW prediction accuracy, they are time consuming, require trained and skilled farm labor, and can be stressful for both animals and handlers, especially when repeated daily. With the concurrent advancement of contactless electro-optical sensors (e.g., 2D, 3D, and infrared cameras), computer vision (CV) technologies, and artificial intelligence fields such as machine learning (ML) and deep learning (DL), 2D and 3D images have begun to be used as biometric and morphometric proxies for BW estimation. This manuscript reviews CV-based and ML/DL-based BW prediction methods and discusses their strengths, weaknesses, and potential for industry application.
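To make the indirect, measurement-to-BW idea concrete, below is a minimal sketch that fits a regression model mapping a few image-derived morphometric features to BW. The feature names (heart girth, body length, wither height), the synthetic data, and the random-forest model are illustrative assumptions, not the pipeline of any specific study reviewed here.

```python
# Minimal sketch: regressing body weight (BW) on image-derived morphometric
# features. All data, feature names, and coefficients are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, r2_score

rng = np.random.default_rng(42)
n = 500

# Hypothetical morphometric proxies extracted from 2D/3D images (cm).
heart_girth = rng.normal(180, 15, n)
body_length = rng.normal(150, 12, n)
wither_height = rng.normal(130, 8, n)

# Synthetic ground-truth BW (kg) with noise, loosely mimicking the classical
# girth-by-length relationships used with manual tape measurements.
bw = 0.00012 * heart_girth**2 * body_length + rng.normal(0, 15, n)

X = np.column_stack([heart_girth, body_length, wither_height])
X_train, X_test, y_train, y_test = train_test_split(X, bw, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)

print(f"MAE: {mean_absolute_error(y_test, pred):.1f} kg")
print(f"R2:  {r2_score(y_test, pred):.3f}")
```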
Objective: Assessment of surgical skills is crucial for improving training standards and ensuring the quality of patient care. This study aimed to develop a gradient-boosting classification model that classifies surgical expertise into inexperienced, competent, and experienced levels in robot-assisted surgery (RAS) using visual metrics. Methods: Eye gaze data were recorded from 11 participants performing 4 subtasks (blunt dissection, retraction, cold dissection, and hot dissection) on live pigs using the da Vinci robot, and visual metrics were extracted from these data. One expert RAS surgeon evaluated each participant's performance and expertise level using the modified Global Evaluative Assessment of Robotic Skills (GEARS) assessment tool. The extracted visual metrics were used to classify surgical skill levels and to evaluate individual GEARS metrics. Analysis of variance (ANOVA) was used to test differences in each feature across skill levels. Results: Classification accuracies for blunt dissection, retraction, cold dissection, and hot dissection were 95%, 96%, 96%, and 96%, respectively. Time to completion differed significantly among the 3 skill levels only for the retraction subtask (P = 0.04). Performance differed significantly among the 3 surgical skill levels for all subtasks (P < 0.01). The extracted visual metrics were strongly associated with GEARS metrics (R2 > 0.7 for the GEARS metric evaluation models). Conclusions: Machine learning algorithms trained on visual metrics of RAS surgeons can classify surgical skill levels and evaluate GEARS measures. The time to complete a surgical subtask, however, may not be suitable as a stand-alone factor for skill level assessment.
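A minimal sketch of the modeling step described above, assuming synthetic stand-ins for the visual metrics (hypothetical names: fixation count, pupil diameter, task time) and using scikit-learn's GradientBoostingClassifier plus a per-feature one-way ANOVA; it is not the study's actual pipeline or data.

```python
# Sketch: classify skill level (inexperienced/competent/experienced) from
# visual metrics with gradient boosting, and test per-feature differences
# across skill levels with one-way ANOVA. All data are synthetic placeholders.
import numpy as np
from scipy.stats import f_oneway
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
levels = np.repeat([0, 1, 2], 40)  # 0=inexperienced, 1=competent, 2=experienced

# Hypothetical visual metrics that shift with expertise.
fixation_count = rng.normal(120 - 15 * levels, 10)
pupil_diameter = rng.normal(3.5 - 0.2 * levels, 0.3)
task_time = rng.normal(300 - 40 * levels, 30)
X = np.column_stack([fixation_count, pupil_diameter, task_time])

clf = GradientBoostingClassifier(random_state=0)
acc = cross_val_score(clf, X, levels, cv=5, scoring="accuracy")
print(f"Cross-validated accuracy: {acc.mean():.2f}")

# One-way ANOVA per feature across the 3 skill levels.
for name, col in zip(["fixation_count", "pupil_diameter", "task_time"], X.T):
    groups = [col[levels == k] for k in (0, 1, 2)]
    stat, p = f_oneway(*groups)
    print(f"{name}: F={stat:.2f}, P={p:.3g}")
```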
Interest in reducing eructed CH4 is growing, but measuring CH4 emissions is expensive and difficult in large populations. In this study, we investigated the effectiveness of milk mid-infrared spectroscopy (MIRS) data for predicting CH4 emission in lactating Canadian Holstein cows. A total of 181 weekly average CH4 records from 158 Canadian cows and 217 records from 44 Danish cows were used. For each milk spectra record, the corresponding weekly average CH4 emission (g/d), test-day milk yield (MY, kg/d), fat yield (FY, g/d), and protein yield (PY, g/d) were available. The weekly average CH4 emission was predicted using various artificial neural networks (ANN), partial least squares regression, and different sets of predictors. The ANN architectures consisted of 3 training algorithms, 1 to 10 neurons with a hyperbolic tangent activation function in the hidden layer, and 1 neuron with a linear (purelin) activation function in the output layer. Random cross-validation was used to compare the predictor sets: MY (set 1); FY (set 2); PY (set 3); MY and FY (set 4); MY and PY (set 5); MY, FY, and PY (set 6); MIRS (set 7); and MY, FY, PY, and MIRS (set 8). All predictor sets also included age at calving and days in milk, in addition to country, season of calving, and lactation number as categorical effects. Using only MY (set 1), the predictive accuracy (r) ranged from 0.245 to 0.457 and the root mean square error (RMSE) ranged from 87.28 to 99.39 across all prediction models and validation sets. Replacing MY with FY (set 2; r = 0.288-0.491; RMSE = 85.94-98.04) improved the predictive accuracy, but using PY (set 3; r = 0.260-0.468; RMSE = 86.95-98.47) did not. Adding FY to MY (set 4; r = 0.272-0.469; […]) […] applying the calibrated models for large-scale prediction of CH4 emissions.
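The sketch below mirrors the described architecture search (1 to 10 tanh neurons in one hidden layer, a linear output neuron) and the partial least squares baseline on synthetic spectra. It substitutes scikit-learn's lbfgs solver for the training algorithms used in the abstract, and all data and dimensions are placeholders, so it is only an illustration of the setup, not the study's code.

```python
# Sketch: predict weekly average CH4 (g/d) from MIRS-like predictors with a
# single-hidden-layer tanh network (linear output) and a PLS baseline.
# Data are synthetic; the solver only approximates the abstract's setup.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n_records, n_wavenumbers = 400, 60           # placeholder MIRS dimensionality
spectra = rng.normal(size=(n_records, n_wavenumbers))
ch4 = spectra[:, :5].sum(axis=1) * 20 + 400 + rng.normal(0, 30, n_records)

X_tr, X_te, y_tr, y_te = train_test_split(spectra, ch4, test_size=0.2, random_state=0)

# Grid over 1-10 hidden neurons with tanh activation; MLPRegressor's output is linear.
best = None
for n_hidden in range(1, 11):
    ann = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="tanh",
                       solver="lbfgs", max_iter=5000, random_state=0)
    ann.fit(X_tr, y_tr)
    pred = ann.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    r = np.corrcoef(y_te, pred)[0, 1]
    if best is None or rmse < best[1]:
        best = (n_hidden, rmse, r)
print(f"Best ANN: {best[0]} neurons, RMSE={best[1]:.1f}, r={best[2]:.3f}")

# Partial least squares regression baseline.
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
rmse_pls = mean_squared_error(y_te, pls.predict(X_te).ravel()) ** 0.5
print(f"PLS: RMSE={rmse_pls:.1f}")
```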
[…] development, 14 cases were used for internal validation, and 23 were used as a separate testing cohort for evaluating algorithm accuracy. Concordance between AI-enabled automated video analysis and manual human video annotation was 87.6%. Algorithm accuracy was highest for the vesicourethral anastomosis step (98.6%) and lowest for the final inspection and extraction step (63.0%). Conclusions: We present results of an AI-enabled computer vision algorithm for automated annotation of full-length robot-assisted radical prostatectomy (RARP) surgical video. Automated surgical video analysis has practical applications in retrospective video review by surgeons, surgical training, quality assessment, and the development of future algorithms to associate perioperative and long-term outcomes with intraoperative surgical events.
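As a rough illustration of how step-level concordance between automated and manual annotations might be computed, the sketch below compares two per-second step-label sequences, overall and per step. The step list, label encoding, and agreement definition are assumptions for illustration, not the study's evaluation protocol.

```python
# Sketch: second-level concordance between AI-predicted and manually annotated
# surgical step labels, overall and per step. All labels are synthetic.
import numpy as np

steps = ["bladder_takedown", "vesicourethral_anastomosis", "final_inspection_extraction"]
rng = np.random.default_rng(3)

# Hypothetical per-second annotations over a procedure.
manual = rng.integers(0, len(steps), size=5000)
auto = manual.copy()
flip = rng.random(5000) < 0.12                 # randomly relabel ~12% of seconds
auto[flip] = rng.integers(0, len(steps), size=flip.sum())

overall = (auto == manual).mean()
print(f"Overall concordance: {overall:.1%}")

# Per-step accuracy: agreement restricted to seconds the human labeled as that step.
for i, name in enumerate(steps):
    mask = manual == i
    print(f"{name}: {(auto[mask] == manual[mask]).mean():.1%}")
```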
Dry matter intake (DMI) is a fundamental component of an animal's feed efficiency, but measuring the DMI of individual cows is expensive. Mid-infrared reflectance spectroscopy (MIRS) on milk samples could be an inexpensive alternative for predicting DMI. The objectives of this study were (1) to assess whether milk MIRS data could improve DMI predictions for Canadian Holstein cows using artificial neural networks (ANN); (2) to investigate the ability of different ANN architectures to predict unobserved DMI; and (3) to validate the robustness of the developed prediction models. A total of 7,398 milk samples from 509 dairy cows distributed over Canada, Denmark, and the United States were analyzed. Data from Denmark and the United States were used to increase the training data size and variability to improve the generalization of the prediction models over the lactation. For each milk spectra record, the corresponding weekly average DMI (kg/d), test-day milk yield (MY, kg/d), fat yield (FY, g/d), protein yield (PY, g/d), metabolic body weight (MBW), age at calving, year of calving, season of calving, days in milk, lactation number, country, and herd were available. The weekly average DMI was predicted with various ANN architectures using 7 predictor sets, which were created from different combinations of MY, FY, PY, MBW, and MIRS data. All predictor sets also included age at calving and days in milk. In addition, the classification effects of season of calving, country, and lactation number were included in all models. The explored ANN architectures consisted of 3 training algorithms (Bayesian regularization, Levenberg-Marquardt, and scaled conjugate gradient), 2 types of activation functions (hyperbolic tangent and linear), and 1 to 10 neurons in the hidden layer. In addition, partial least squares regression was also applied to predict DMI. Models were compared using cross-validation based on leaving out 10% of records (validation A) and leaving out 10% of cows (validation B). The superior fitting statistics of models comprising MIRS information compared with models fitting milk, fat, and protein yields suggest that other unknown milk components may help explain variation in weekly average DMI. For instance, using MY, FY, PY, and MBW as predictor variables produced a predictive accuracy (r) ranging from 0.510 to 0.652 across ANN models and validation sets. Using MIRS together with MY, FY, PY, and MBW as predictors resulted in improved fitting (r = 0.679-0.777). Including MIRS data improved the weekly average DMI prediction of Canadian Holstein cows, but MIRS appears to predict DMI mostly through its association with milk production traits; its utility for estimating a measure of feed efficiency that accounts for the level of production, such as residual feed intake, might therefore be limited and needs further investigation. The better predictive ability of nonlinear ANN compared with linear ANN and partial least squares regression indicated possible nonlinear relationships between weekly average DMI and the predictor variables. […]
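A minimal sketch of the two validation schemes described above: validation A leaves out 10% of records, whereas validation B leaves out 10% of cows so that no cow contributes records to both training and validation. The synthetic data, predictor columns, and the MLPRegressor stand-in for the ANN are assumptions for illustration only.

```python
# Sketch: record-wise (validation A) vs cow-wise (validation B) 10-fold
# cross-validation for weekly DMI prediction. All data are synthetic.
import numpy as np
from sklearn.model_selection import KFold, GroupKFold, cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n_cows, rec_per_cow = 100, 10
cow_id = np.repeat(np.arange(n_cows), rec_per_cow)

# Placeholder predictors standing in for MY, FY, PY, MBW plus a cow-specific effect.
cow_effect = rng.normal(0, 2, n_cows)[cow_id]
X = rng.normal(size=(n_cows * rec_per_cow, 4)) + cow_effect[:, None]
dmi = X @ np.array([1.5, 0.8, 0.6, 1.2]) + 22 + rng.normal(0, 1.5, len(cow_id))

ann = MLPRegressor(hidden_layer_sizes=(5,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)

# Validation A: leave out 10% of records (a cow's records may appear in
# both the training and validation folds).
r_a = cross_val_score(ann, X, dmi, cv=KFold(10, shuffle=True, random_state=0), scoring="r2")

# Validation B: leave out 10% of cows (GroupKFold keeps each cow's records together).
r_b = cross_val_score(ann, X, dmi, groups=cow_id, cv=GroupKFold(10), scoring="r2")

print(f"Validation A (records): mean R2 = {r_a.mean():.3f}")
print(f"Validation B (cows):    mean R2 = {r_b.mean():.3f}")
```

Grouped (cow-wise) cross-validation typically yields lower but more honest accuracy estimates, because the model cannot exploit cow-specific information shared between training and validation folds.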