Backfat thickness (BF) is closely related to the service life and reproductive performance of sows, and dynamic monitoring of BF is a critical part of the production process on large-scale pig farms. This study proposed a hybrid CNN-ViT (Vision Transformer) model for measuring sows' BF, addressing the high labor intensity of traditional contact measurement and the low efficiency of existing non-contact measurement models. The CNN-ViT introduces depthwise separable convolution and lightweight self-attention, and consists mainly of a Pre-local Unit (PLU), a Lightweight ViT (LViT) and an Inverted Residual Unit (IRU). The model extracts both local and global image features, making it well suited to small datasets. It was evaluated on images of 106 pregnant sows split randomly into seven datasets. The CNN-ViT achieved a Mean Absolute Error (MAE) of 0.83 mm, a Root Mean Square Error (RMSE) of 1.05 mm, a Mean Absolute Percentage Error (MAPE) of 4.87% and a coefficient of determination (R²) of 0.74. Compared with the LViT-IRU, PLU-IRU and PLU-LViT variants, the CNN-ViT's MAE decreased by more than 12%, RMSE by more than 15%, and MAPE by more than 15%, while R² improved by more than 17%. Compared with ResNet50 and ViT, its MAE decreased by more than 7%, RMSE by more than 13%, and MAPE by more than 7%, while R² improved by more than 15%. The method better meets the demand for non-contact automatic measurement of pregnant sows' BF in actual production and provides technical support for the intelligent management of pregnant sows.
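Both abstracts report the same four regression metrics (MAE, RMSE, MAPE, R²). As an illustration of how these figures are computed from paired ground-truth and predicted BF values, here is a minimal sketch using only the Python standard library; the function name and the toy data are ours, not from the studies:

```python
import math

def regression_metrics(y_true, y_pred):
    """Return (MAE, RMSE, MAPE in %, R^2) for paired observations."""
    n = len(y_true)
    errors = [p - t for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    # MAPE expresses each error relative to the true value, in percent
    mape = 100.0 * sum(abs(e) / abs(t) for t, e in zip(y_true, errors)) / n
    # R^2 = 1 - (residual sum of squares / total sum of squares)
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, mape, r2

# Toy example: three backfat measurements in mm (illustrative values only)
mae, rmse, mape, r2 = regression_metrics([10.0, 12.0, 14.0], [11.0, 12.0, 13.0])
```

A lower MAE/RMSE/MAPE and a higher R² indicate a better fit, which is the sense in which the CNN-ViT's improvements over the ablated variants are stated above.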
Since sow backfat thickness (BFT) is highly correlated with service life and reproductive performance, dynamic monitoring of BFT is a critical component of large-scale sow farm management. Existing contact methods for measuring sow BFT suffer from high measurement intensity, stress reactions in sows, low biosafety, and difficulty meeting the requirements of repeated measurement. This article presents a two-dimensional (2D) image-based approach for determining the BFT of pregnant sows, combined with the backfat growth rate (BGR). 2D image features extracted by a convolutional neural network (CNN) and artificially defined phenotypic features of sows (hip width, hip height, body length, hip height-width ratio, length-width ratio, and waist-hip ratio) were each combined with BGR to construct prediction models for sow BFT using support vector regression (SVR). Testing and comparison showed that CNN-extracted image features could effectively replace the artificially defined features, and that BGR improved model accuracy. The CNN-BGR-SVR model performed best, with an R² of 0.72, a mean absolute error of 1.21 mm, a root mean square error of 1.50 mm, and a mean absolute percentage error of 7.57%. The results demonstrate that the CNN-BGR-SVR model based on 2D images can detect sow BFT, providing a new reference for non-contact sow BFT detection technology.
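To make the phenotypic-feature branch concrete, the following sketch assembles the named body measurements and derived ratios, plus BGR, into a single predictor vector of the kind an SVR would consume. The ratio definitions (each relative to hip width) and the `waist_width` input are our assumptions for illustration; the paper does not spell out the exact formulas:

```python
def build_feature_vector(hip_width, hip_height, body_length, waist_width, bgr):
    """Assemble artificially defined phenotypic features plus BGR
    into one flat predictor vector (illustrative definitions only)."""
    return [
        hip_width,
        hip_height,
        body_length,
        hip_height / hip_width,   # hip height-width ratio (assumed definition)
        body_length / hip_width,  # length-width ratio (assumed definition)
        waist_width / hip_width,  # waist-hip ratio (assumed definition)
        bgr,                      # backfat growth rate appended as a predictor
    ]

# Illustrative measurements in cm, BGR as a unitless rate
features = build_feature_vector(30.0, 60.0, 120.0, 24.0, 0.1)
```

In the CNN variant, the first six entries would be replaced by the CNN-extracted image features, with BGR appended in the same way before fitting the SVR.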