2022
DOI: 10.3389/fpls.2022.972445

Detection and localization of citrus fruit based on improved You Only Look Once v5s and binocular vision in the orchard

Abstract: Intelligent detection and localization of mature citrus fruits is a critical challenge in developing an automatic harvesting robot. Variable illumination conditions and different occlusion states are some of the essential issues that must be addressed for the accurate detection and localization of citrus in the orchard environment. In this paper, a novel method for the detection and localization of mature citrus using improved You Only Look Once (YOLO) v5s with binocular vision is proposed. First, a new loss f…
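
For readers unfamiliar with the baseline the paper improves on, the following is a minimal sketch of running the stock YOLOv5s detector from the Ultralytics PyTorch hub on a single image. It does not reproduce the paper's improved loss function or the binocular localization step, and "orchard.jpg" is a placeholder image path.

```python
# Minimal sketch: stock YOLOv5s inference via the Ultralytics PyTorch hub.
# This is the unmodified baseline detector, not the improved network from the
# paper; "orchard.jpg" is a placeholder for a test image of citrus trees.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.25  # discard detections below this confidence

results = model("orchard.jpg")   # forward pass on one image
for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
    # Each row is one detection: corner coordinates, confidence, class index.
    print(f"class={int(cls)} conf={conf:.2f} "
          f"box=({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```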

Cited by 21 publications (5 citation statements)
References 29 publications (34 reference statements)
“…We embedded the GSConv module into the feature fusion stage, allowing us to reduce the number of parameters while maintaining high accuracy in our model. We did not use GSConv in the neck network because it would lead to deeper layers of the neck network, and a deeper network would exacerbate the resistance to spatial information flow (Hou et al., 2022).…”
Section: Methods
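
As a rough illustration of the GSConv idea referenced in the quote above, here is a hedged PyTorch sketch assuming the commonly described Slim-Neck design: a dense convolution producing half the output channels, a depthwise convolution on that result, then concatenation and channel shuffle. It is not the cited implementation; the class name, kernel sizes, and activation choice are illustrative assumptions.

```python
# Hedged sketch of a GSConv-style block (assumed Slim-Neck layout, not the
# cited implementation).
import torch
import torch.nn as nn

class GSConvSketch(nn.Module):
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        assert c_out % 2 == 0, "output channels must be even"
        c_half = c_out // 2
        # Dense convolution produces half of the output channels.
        self.dense = nn.Sequential(
            nn.Conv2d(c_in, c_half, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_half),
            nn.SiLU(),
        )
        # Depthwise convolution mixes spatial information cheaply.
        self.depthwise = nn.Sequential(
            nn.Conv2d(c_half, c_half, 5, 1, 2, groups=c_half, bias=False),
            nn.BatchNorm2d(c_half),
            nn.SiLU(),
        )

    def forward(self, x):
        x1 = self.dense(x)
        x2 = self.depthwise(x1)
        y = torch.cat((x1, x2), dim=1)
        # Channel shuffle so dense and depthwise features are interleaved.
        b, c, h, w = y.shape
        return y.view(b, 2, c // 2, h, w).transpose(1, 2).reshape(b, c, h, w)

# Example: swap a plain 3x3 conv in a feature-fusion stage for GSConvSketch.
if __name__ == "__main__":
    m = GSConvSketch(128, 256, k=3, s=1)
    print(m(torch.randn(1, 128, 40, 40)).shape)  # torch.Size([1, 256, 40, 40])
```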
“…Continuously refining deep learning methods allows for better detection of obscured objects. Hou et al. (2022) utilized an improved YOLOv5s method with binocular vision to detect and locate mature citrus fruits under uniform lighting, achieving a recall rate of 98%.…”
Section: Introduction
“…Finally, the spatial position and attitude information of the citrus were obtained based on the camera imaging model and the geometrical features of the citrus. The experimental results showed that the recalls of citrus detection under non-uniform lighting, weak lighting and good lighting conditions were 99.55%, 98.47% and 98.48%, respectively (Hou et al., 2022). Sadaf Zeeshan et al. (2023) proposed a deep learning convolutional neural network model for orange fruit detection using a generic real-time dataset for detecting oranges in complex dynamic environments.…”
Section: Introduction
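
The localization step described in the quote above relies on a binocular camera model. Below is a minimal sketch of standard rectified-stereo triangulation under a pinhole assumption; the function name and intrinsics are illustrative, and this is not the authors' exact imaging model, which also recovers fruit attitude from geometrical features.

```python
# Hedged sketch of rectified-stereo triangulation with a pinhole camera model.
import numpy as np

def localize_from_stereo(u_left, v_left, disparity, fx, fy, cx, cy, baseline):
    """Return (X, Y, Z) of a fruit centre in the left-camera frame, in metres.

    u_left, v_left : pixel coordinates of the detected fruit centre (left image)
    disparity      : u_left - u_right for the same point, in pixels (> 0)
    fx, fy, cx, cy : intrinsics of the rectified left camera, in pixels
    baseline       : distance between the two camera centres, in metres
    """
    if disparity <= 0:
        raise ValueError("disparity must be positive for a valid depth")
    z = fx * baseline / disparity   # depth from similar triangles
    x = (u_left - cx) * z / fx      # back-project through the pinhole model
    y = (v_left - cy) * z / fy
    return np.array([x, y, z])

# Example with made-up intrinsics and a 40-pixel disparity.
print(localize_from_stereo(720, 540, 40.0, fx=1000.0, fy=1000.0,
                           cx=640.0, cy=512.0, baseline=0.06))
```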
“…Zheng et al. [41] pruned the backbone of YOLOv4 and removed the redundant portion of the neck network to propose the YOLO BP green citrus detection algorithm with an average precision of 91.55%. Hou et al. [42] used "Shantanju" citrus collected from Conghua, Guangzhou, and improved the YOLOv5s algorithm to detect and locate mature citrus. The recall rates under uneven, weak, and good lighting were 99.55%, 98.47%, and 98.48%, respectively.…”
Section: Introduction