Camellia oleifera fruits are randomly distributed in an orchard and are easily blocked or covered by leaves. In addition, the colors of the leaves and fruits are similar, and flowers and fruits grow at the same time, creating many ambiguities. A large shock force during picking can cause the flowers to fall and reduce the yield. As a result, accurate positioning is a difficult problem for robotic picking, and target recognition and localization of Camellia oleifera fruits in complex environments remain challenging. In this paper, a fusion method combining deep-learning-based visual perception and image processing is proposed to adaptively and actively recognize Camellia oleifera fruits and locate their picking points. First, to adapt to target classification and recognition in complex field scenes, the parameters of the You Only Look Once v7 (YOLOv7) model were optimized and selected to detect Camellia oleifera fruits and determine the center point of the fruit recognition frame. Then, image processing and a geometric algorithm are used to segment the image, determine the morphology of the fruit, and extract the centroid of the fruit contour, after which the position deviation between this centroid point and the center point of the YOLO recognition frame is analyzed. The perceptual recognition processing was validated through several experiments under front-lighting, backlighting, partial occlusion, and other test conditions. The results demonstrate that the precision of YOLOv7 is close to that of YOLOv5s, while the mean average precision of YOLOv7 is higher than that of YOLOv5s. For partially occluded Camellia oleifera fruits, the YOLOv7 algorithm outperforms YOLOv5s and improves detection accuracy. The contour of a Camellia oleifera fruit can be extracted completely via image processing. The average position deviation between the centroid point extracted from the image and the center point of the YOLO recognition frame is 2.86 pixels; thus, the center point of the YOLO recognition frame can be considered approximately consistent with the extracted centroid point.
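To make the centroid-versus-box-center comparison concrete, the following is a minimal Python/OpenCV sketch. It assumes a detection box (x1, y1, x2, y2) already produced by YOLOv7, and Otsu thresholding stands in for the paper's unspecified segmentation pipeline; `contour_centroid_deviation` is a hypothetical helper, not code from the paper.

```python
import cv2
import numpy as np

def contour_centroid_deviation(image_bgr, yolo_box):
    """Return the pixel deviation between the fruit-contour centroid and
    the center of a YOLO recognition frame (x1, y1, x2, y2).
    Illustrative sketch only; segmentation here is a simple stand-in."""
    x1, y1, x2, y2 = yolo_box
    roi = image_bgr[y1:y2, x1:x2]

    # Assumed segmentation step: Otsu threshold on the grayscale ROI
    # (the paper's actual segmentation method is not reproduced here).
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Keep only the largest external contour as the fruit outline.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)

    # Centroid from image moments, shifted back to full-image coordinates.
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    cx = m["m10"] / m["m00"] + x1
    cy = m["m01"] / m["m00"] + y1

    # Center point of the YOLO recognition frame.
    bx, by = (x1 + x2) / 2.0, (y1 + y2) / 2.0

    # Euclidean position deviation in pixels.
    return float(np.hypot(cx - bx, cy - by))
```

Image moments are the standard way to compute a contour centroid; averaging this deviation over many detections would yield a figure comparable to the 2.86-pixel result reported above.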
Accurate road extraction and recognition of roadside fruit in complex orchard environments are essential prerequisites for robotic fruit picking and walking behavioral decisions. In this study, a novel algorithm was proposed for unstructured road extraction and synchronous roadside fruit recognition, with wine grapes in unstructured orchards as the research object. Initially, a preprocessing method tailored to field orchards was proposed to reduce the interference of adverse factors in the operating environment. The preprocessing method comprises four parts: interception of the region of interest, bilateral filtering, logarithmic space transformation, and image enhancement based on the MSRCR algorithm. Subsequently, analysis of the enhanced image enabled optimization of the gray factor, and a road-region extraction method based on dual-space fusion was proposed through color channel enhancement and gray factor optimization. Furthermore, a YOLO model suitable for grape cluster recognition in the wild environment was selected, and its parameters were optimized to enhance the model's recognition performance for randomly distributed grapes. Finally, a fusion recognition framework was established in which the road extraction result is taken as input and the parameter-optimized YOLO model is used to identify roadside fruits, thus realizing synchronous road extraction and roadside fruit detection. Experimental results demonstrated that the proposed preprocessing-based method can reduce the impact of interfering factors in complex orchard environments and enhance the quality of road extraction. Using the optimized YOLOv7 model, the precision, recall, mAP, and F1-score for roadside fruit cluster detection were 88.9%, 89.7%, 93.4%, and 89.3%, respectively, all higher than those of the YOLOv5 model, making it more suitable for roadside grape recognition. Compared with the results obtained by the grape detection algorithm alone, the proposed synchronous algorithm increased the number of fruit identifications by 23.84% and the detection speed by 14.33%. This research enhances the perception ability of robots and provides solid support for behavioral decision systems.
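The four-step preprocessing chain can be illustrated with a short Python/OpenCV sketch. The ROI box, bilateral-filter parameters, and retinex scales below are illustrative assumptions rather than the paper's settings, and a simplified multi-scale retinex stands in for full MSRCR (the color-restoration factor is omitted for brevity).

```python
import cv2
import numpy as np

def preprocess_orchard_frame(image_bgr, roi=None, sigmas=(15, 80, 250)):
    """Sketch of the four-part preprocessing: ROI interception, bilateral
    filtering, log-space transformation, and multi-scale retinex
    enhancement (a simplified stand-in for MSRCR)."""
    # 1. Intercept the region of interest (here assumed to be a pixel box).
    if roi is not None:
        x1, y1, x2, y2 = roi
        image_bgr = image_bgr[y1:y2, x1:x2]

    # 2. Bilateral filter: suppress texture noise while preserving the
    # road-boundary edges (parameter values are assumptions).
    smoothed = cv2.bilateralFilter(image_bgr, d=9,
                                   sigmaColor=75, sigmaSpace=75)

    # 3. Logarithmic space transformation to compress lighting dynamics.
    img = smoothed.astype(np.float64) + 1.0
    log_img = np.log(img)

    # 4. Simplified multi-scale retinex: subtract the log of Gaussian
    # illumination estimates at several scales and average the results.
    retinex = np.zeros_like(log_img)
    for sigma in sigmas:
        blurred = cv2.GaussianBlur(img, (0, 0), sigma)
        retinex += log_img - np.log(blurred)
    retinex /= len(sigmas)

    # Rescale each channel to 0-255 for the downstream gray-factor
    # optimization and dual-space road extraction.
    out = np.zeros_like(retinex)
    for c in range(3):
        ch = retinex[:, :, c]
        out[:, :, c] = (ch - ch.min()) / (ch.max() - ch.min() + 1e-8) * 255
    return out.astype(np.uint8)
```

The enhanced frame returned here would then feed both the road-region extraction and the YOLO fruit detector in the synchronous framework described above.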