Detecting litchis in a complex natural environment is important for yield estimation and provides reliable support for litchi-picking robots. This paper proposes an improved litchi detection model, named YOLOv5-litchi, for litchi detection in complex natural environments. First, we add a convolutional block attention module to each C3 module in the backbone of the network to enhance its ability to extract important feature information. Second, we add a small-object detection layer so that the model can locate smaller targets, improving detection performance on small objects. Third, we apply Mosaic-9 data augmentation to increase the diversity of the training data. Then, we replace the bounding-box regression loss with the CIoU loss to accelerate the convergence of prediction-box regression. Finally, we add weighted-boxes fusion to bring the prediction boxes closer to the targets and reduce missed detections. Experiments were carried out to verify the effectiveness of these improvements. The results show that the mAP and recall of the YOLOv5-litchi model improved by 12.9% and 15%, respectively, compared with the unimproved YOLOv5 network. The inference time of the YOLOv5-litchi model is 25 ms per image, which is much better than that of Faster R-CNN and YOLOv4. Compared with the unimproved YOLOv5 network, the mAP of the YOLOv5-litchi model increased by 17.4% in large visual scenes. Among the five models compared, YOLOv5-litchi performs best for litchi detection. Therefore, YOLOv5-litchi achieves a good balance between speed, model size, and accuracy; it can meet the needs of litchi detection in agriculture and provides technical support for yield estimation and litchi-picking robots.
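The CIoU loss referred to above extends the plain IoU overlap with a center-distance penalty and an aspect-ratio consistency term, which is what speeds up box regression. Below is a minimal single-box sketch of that formulation in plain Python; the (x1, y1, x2, y2) box format, the `eps` constant, and the function name are illustrative assumptions rather than the authors' exact implementation, which additionally handles anchor matching and batching inside YOLOv5.

```python
import math

def ciou_loss(box_pred, box_gt, eps=1e-7):
    """Complete-IoU (CIoU) loss for one pair of axis-aligned boxes.

    Boxes are (x1, y1, x2, y2). Hypothetical single-box sketch; not the
    paper's exact, batched implementation.
    """
    x1, y1, x2, y2 = box_pred
    X1, Y1, X2, Y2 = box_gt

    # Intersection and union areas -> IoU
    iw = max(0.0, min(x2, X2) - max(x1, X1))
    ih = max(0.0, min(y2, Y2) - max(y1, Y1))
    inter = iw * ih
    union = (x2 - x1) * (y2 - y1) + (X2 - X1) * (Y2 - Y1) - inter
    iou = inter / (union + eps)

    # Squared distance between box centers, normalised by the squared
    # diagonal of the smallest enclosing box (the "c" term in CIoU).
    cx_p, cy_p = (x1 + x2) / 2, (y1 + y2) / 2
    cx_g, cy_g = (X1 + X2) / 2, (Y1 + Y2) / 2
    rho2 = (cx_p - cx_g) ** 2 + (cy_p - cy_g) ** 2
    cw = max(x2, X2) - min(x1, X1)
    ch = max(y2, Y2) - min(y1, Y1)
    c2 = cw ** 2 + ch ** 2 + eps

    # Aspect-ratio consistency term v and its trade-off weight alpha
    v = (4 / math.pi ** 2) * (
        math.atan((X2 - X1) / (Y2 - Y1 + eps))
        - math.atan((x2 - x1) / (y2 - y1 + eps))
    ) ** 2
    alpha = v / (1 - iou + v + eps)

    return 1 - (iou - rho2 / c2 - alpha * v)
```

Because the center-distance and aspect-ratio penalties stay non-zero even when two boxes do not overlap, this loss gives useful gradients early in training, which is the convergence benefit the abstract refers to.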
It is necessary to develop automatic picking technology to improve the efficiency of litchi picking, and accurate segmentation of litchi branches is key to allowing robots to complete the picking task. To solve the problem of inaccurate segmentation of litchi branches under natural conditions, this paper proposes a segmentation method for litchi branches based on an improved DeepLabv3+, which replaces the original backbone of DeepLabv3+ with Dilated Residual Networks to enhance the model's feature extraction capability. During training, a combination of cross-entropy loss and Dice coefficient loss is used as the loss function so that the model pays more attention to the litchi-branch area, alleviating the negative impact of the imbalance between litchi branches and the background. In addition, a Coordinate Attention module is added to the atrous spatial pyramid pooling so that both the channel and positional information of the multi-scale semantic features acquired by the network are considered. The experimental results show that the model's mean intersection over union and mean pixel accuracy are 90.28% and 94.95%, respectively, and its speed is 19.83 frames per second (FPS). Compared with the classical DeepLabv3+ network, the model's mean intersection over union and mean pixel accuracy improve by 13.57% and 15.78%, respectively. This method can accurately segment litchi branches, providing strong technical support to help litchi-picking robots find branches.
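The combined loss described above pairs a pixel-wise term (cross-entropy) with a region-overlap term (Dice), which counteracts the background dominating the thin branch regions. A minimal PyTorch sketch follows; the equal weighting, the `smooth` constant, and the function name are assumptions for illustration, not the paper's reported settings.

```python
import torch
import torch.nn.functional as F

def ce_dice_loss(logits, target, ce_weight=0.5, smooth=1.0):
    """Combined cross-entropy + Dice loss for semantic segmentation.

    logits: (N, C, H, W) raw network outputs; target: (N, H, W) class indices.
    The 0.5/0.5 weighting and `smooth` constant are illustrative assumptions.
    """
    # Standard pixel-wise cross-entropy
    ce = F.cross_entropy(logits, target)

    # Soft Dice computed on class probabilities vs. one-hot targets
    num_classes = logits.shape[1]
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)  # sum over batch and spatial dims, keep per-class scores
    intersection = torch.sum(probs * one_hot, dims)
    cardinality = torch.sum(probs + one_hot, dims)
    dice = (2.0 * intersection + smooth) / (cardinality + smooth)
    dice_loss = 1.0 - dice.mean()

    return ce_weight * ce + (1.0 - ce_weight) * dice_loss
```

Because the Dice term is normalised by the total number of predicted and true foreground pixels per class, a small branch class contributes as much to the loss as the large background class, which is how the class imbalance is mitigated.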