This article proposes a real-time classification and detection method for mutton parts based on a single shot detector (SSD). We acquired 9,000 images of various mutton parts in a sheep slaughtering workshop, covering multiple classes with many samples per class. After image preprocessing, an image dataset of the mutton parts was established for model training. We then applied transfer learning to train an SSD-VGG network and obtain an optimal model, which determines the category and position of each mutton part in an image, thereby realizing the classification and detection of mutton parts. The mean average precision (mAP) and the average processing time for a single image are used as the accuracy and speed indicators, respectively, for judging the detection performance of the model. To improve real-time performance, the VGG feature extraction network is replaced with MobileNetV1. Furthermore, we built an additional illumination dataset with two brightness levels, "bright" and "dark," to verify the generalization ability of the optimized model. Finally, four common object detection algorithms, namely YoloV3-MobileNetV1, YoloV3-DarkNet53, Fast-RCNN, and Cascade-RCNN, were introduced for comparative experiments on the mutton image dataset. The test results show that SSD-MobileNetV1 exhibits high accuracy and good real-time performance, with a degree of generalization ability. It offers better overall detection ability than the other methods and can provide technical support for mutton processing.

Practical Applications: Currently, the multiple parts of mutton are identified and sorted manually during processing, which is time-consuming and laborious and carries certain hidden food safety hazards. A deep-learning-based object detection method can solve these problems effectively.
Therefore, this study uses SSD to perform accurate real-time recognition of the multiple parts of mutton from images and to provide visual guidance for mutton sorting robots. It can also aid further research in the slaughtering and processing of other meats.
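The abstract above evaluates detectors by mAP, which rests on matching predicted boxes to ground-truth boxes by intersection over union (IoU). As a minimal sketch of that matching step (not the authors' code; box coordinates, the 0.5 threshold, and function names are illustrative assumptions):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_detections(preds, gts, iou_thresh=0.5):
    """Greedily mark each prediction (sorted by confidence) as a
    true positive (TP) or false positive (FP), the first step of an
    AP computation. preds: list of (confidence, box); gts: list of boxes."""
    used = set()
    results = []
    for conf, box in sorted(preds, key=lambda p: -p[0]):
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(gts):
            if j in used:
                continue
            iou = box_iou(box, gt)
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_iou >= iou_thresh:
            used.add(best_j)
            results.append("TP")
        else:
            results.append("FP")
    return results
```

For example, two boxes (0, 0, 2, 2) and (1, 1, 3, 3) overlap in a unit square, giving IoU = 1/7; a prediction with that overlap would count as a false positive at the usual 0.5 threshold.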
Accurate recognition of the three parts of the sheep carcass is key to research on mutton cutting robots. The parts of the sheep carcass are connected to each other and share similar features, which makes them difficult to identify and detect. With the development of deep-learning-based image semantic segmentation, however, it has become possible to explore this technology for real-time recognition of the three carcass parts. Based on ICNet, we propose a real-time semantic segmentation method for sheep carcass images. We first acquired images of sheep carcasses and used augmentation to expand the image data; after normalization, we annotated the images with LabelMe and built the sheep carcass image dataset. We then established the ICNet model and trained it with transfer learning. The segmentation accuracy, mean intersection over union (MIoU), and average processing time of a single image were used as the evaluation criteria for the segmentation effect. In addition, we verified the generalization ability of ICNet on the sheep carcass image dataset through segmentation experiments at different brightness levels. Finally, U-Net, DeepLabv3, PSPNet, and Fast-SCNN were introduced for comparative experiments to further verify the segmentation performance of ICNet. The experimental results show that, on the sheep carcass image dataset, the segmentation accuracy and MIoU of our method are 97.68% and 88.47%, respectively, with a single-image processing time of 83 ms. The MIoU of U-Net and DeepLabv3 is higher than that of ICNet by 0.22% and 0.03%, respectively, but their single-image processing times are longer by 186 ms and 430 ms. Compared with PSPNet and Fast-SCNN, the MIoU of ICNet is higher by 1.25% and 4.49%, respectively, while its single-image processing time is 469 ms shorter than that of PSPNet and 7 ms longer than that of Fast-SCNN.
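The MIoU figures quoted above are the standard segmentation metric: the per-class intersection over union, averaged over classes. A minimal sketch of that computation over flattened label arrays (illustrative only, not the authors' evaluation code):

```python
def mean_iou(pred, gt, num_classes):
    """Mean intersection over union for semantic segmentation.

    pred, gt: flat sequences of integer class labels, one per pixel.
    Builds a confusion matrix, then averages per-class IoU over the
    classes that appear in either prediction or ground truth."""
    conf = [[0] * num_classes for _ in range(num_classes)]
    for p, g in zip(pred, gt):
        conf[g][p] += 1  # rows: ground truth, columns: prediction
    ious = []
    for c in range(num_classes):
        tp = conf[c][c]
        fp = sum(conf[r][c] for r in range(num_classes)) - tp
        fn = sum(conf[c]) - tp
        denom = tp + fp + fn
        if denom > 0:  # skip classes absent from both pred and gt
            ious.append(tp / denom)
    return sum(ious) / len(ious)
```

For instance, with predictions [0, 0, 1, 1] against ground truth [0, 1, 1, 1], class 0 has IoU 1/2 and class 1 has IoU 2/3, so the MIoU is 7/12.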