A key challenge for automated orchard management robots is the rapid and accurate identification of crop growth and maturity conditions for subsequent operations such as automatic pollination, fertilization, and picking. Strawberries in particular have a short ripening period, and the fruits heavily overlap and shade one another, which makes traditional detection methods time-consuming and inefficient. We therefore designed and developed a strawberry growth detection algorithm, SDNet (Strawberry Detect Net). The algorithm is based on the YOLOX model and replaces the original CSP block in the backbone network with a self-designed feature extraction module, the C3HB block, to improve the spatial interaction capability and detection accuracy of the algorithm; a normalization-based attention module (NAM) is then embedded in the neck to improve the detection accuracy and attention weights for small-target fruits; and the SIoU loss function is adopted to improve the prediction accuracy of the detection model, finally enabling the detection of strawberry fruits across five growth states. The experimental results show that the mAP, precision, and recall of SDNet are 94.26%, 93.15%, and 90.72%, respectively, with a detection time of 30.5 ms. These values are 4.08%, 3.64%, and 2.04% higher than those of YOLOX, respectively, with no significant change in model size. The results can effectively address the low accuracy of strawberry fruit growth-state monitoring in complex environments and provide an important technical reference for realizing unmanned farms and precision agriculture.
INDEX TERMS: Fruit detection, Object detection, Real-time counting, Digital agriculture.
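The abstract does not include implementation details. As one concrete illustration of the attention step it mentions, the sketch below shows a NAM-style channel attention block in PyTorch, which reweights channels using the batch-norm scale factors; the class name, channel count, and layer sizes are assumptions for illustration and are not the authors' code.

```python
import torch
import torch.nn as nn

class NAMChannelAttention(nn.Module):
    """Sketch of a NAM-style channel attention block (illustrative, not the authors' code).

    Channel importance is taken from the absolute batch-norm scale factors (gamma),
    normalized to sum to 1, and used to reweight the normalized feature map.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.bn(x)
        gamma = self.bn.weight.abs()
        weight = gamma / gamma.sum()          # per-channel attention weights
        x = x * weight.view(1, -1, 1, 1)      # reweight channels
        return torch.sigmoid(x) * residual    # gate the original features

# Example: apply to a neck feature map with 256 channels (sizes assumed for illustration)
attn = NAMChannelAttention(256)
out = attn(torch.randn(1, 256, 40, 40))
```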
The efficient detection of grapes is a crucial technology for fruit-picking robots. To better distinguish grapes from branches and leaves of similar color, and to improve the detection accuracy of green grapes affected by cluster adhesion, this study proposes a Shine-Muscat Grape Detection Model (S-MGDM) based on an improved YOLOv3 for the ripening stage. DenseNet is fused into the backbone feature extraction network to extract richer low-level grape information; depthwise separable convolution, CBAM, and SPPNet are added to the multi-scale detection module to enlarge the receptive field for grape targets and reduce model computation; meanwhile, PANet is combined with FPN to promote information flow between networks and iteratively extract grape features. In addition, the CIoU regression loss function is used and the anchor box sizes are recomputed with the k-means algorithm to improve detection accuracy. The improved detection model achieves an AP of 96.73% and an F1 score of 91% on the test set, which are 3.87% and 3% higher than the original network model, respectively; the average detection speed on a GPU reaches 26.95 frames/s, 6.49 frames/s higher than the original model. Comparisons with several mainstream detection algorithms, such as SSD and the YOLO series, show that the method offers excellent detection accuracy and good real-time performance, providing an important reference for the accurate identification of Shine-Muscat grapes at maturity.
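As a minimal sketch of the anchor-size step described above (recomputing prior boxes with k-means), the standalone Python function below clusters box widths and heights using IoU as the similarity measure. The function names, the mean-based cluster update, and the default of nine anchors are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def iou_wh(boxes: np.ndarray, clusters: np.ndarray) -> np.ndarray:
    """IoU between (N, 2) box width/heights and (k, 2) cluster width/heights,
    assuming all boxes share the same top-left corner."""
    w = np.minimum(boxes[:, None, 0], clusters[None, :, 0])
    h = np.minimum(boxes[:, None, 1], clusters[None, :, 1])
    inter = w * h
    union = boxes[:, 0:1] * boxes[:, 1:2] + clusters[:, 0] * clusters[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes: np.ndarray, k: int = 9, iters: int = 100, seed: int = 0) -> np.ndarray:
    """Cluster dataset box sizes into k anchor sizes using IoU-based assignment."""
    rng = np.random.default_rng(seed)
    clusters = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, clusters), axis=1)   # best-matching cluster
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else clusters[i] for i in range(k)])
        if np.allclose(new, clusters):
            break
        clusters = new
    return clusters[np.argsort(clusters.prod(axis=1))]        # sort anchors by area
```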
Crop diseases have an important impact on the safe production of food, so early, automated identification of crop diseases is very important for farmers to increase production and income. This paper proposes a tomato leaf disease identification method based on an optimized MobileNetV2 model. A dataset of 20,400 tomato disease images was created from images taken in a greenhouse and images obtained from the PlantVillage database. The optimized MobileNetV2 model was trained on this dataset to obtain a classification model for tomato leaf diseases. Experimental validation shows that the average recognition accuracy of the model is 98.3% and the recall is 94.9%, which are 1.2% and 3.9% higher than those of the original model, respectively. The average prediction time for a single image is about 76 ms, a 2.94% improvement over the original model. To verify the performance of the optimized MobileNetV2 model, it was compared with Xception, Inception, and VGG16 feature extraction networks, each trained with transfer learning. The experimental results show that the average recognition accuracy of the model is 0.4 to 2.4 percentage points higher than that of the Xception, Inception, and VGG16 models. The method can provide technical support for the identification of tomato diseases and is also important for plant growth monitoring in precision agriculture.
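A minimal sketch of the generic transfer-learning setup described above is shown below, using torchvision's MobileNetV2 with a frozen ImageNet backbone and a new classifier head. NUM_CLASSES and the learning rate are assumptions for illustration; the sketch does not reproduce the paper's specific optimizations to MobileNetV2.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # assumed number of tomato leaf disease classes, for illustration only

# Load an ImageNet-pretrained MobileNetV2 and freeze the feature extractor
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False

# Replace the final classifier layer to predict the disease classes
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)

# Train only the new head in the first stage
optimizer = torch.optim.Adam(model.classifier[1].parameters(), lr=1e-3)
```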