2021
DOI: 10.1016/j.compag.2021.106052

Improved multi-classes kiwifruit detection in orchard to avoid collisions during robotic picking


Cited by 70 publications (21 citation statements)
References 32 publications
“…From the trends of the convergence curve, the three models learned the object features well, and all the loss values after stabilization were less than 1, which shows that the models can be used in detection, similar to the literature [33]. We evaluate the training results and compare the AP values of the three detection models on banana bunches and stalks, and the mAP values of the entire model for all detection classes [33]. The two calculation formulas are (1) and (2):…”
Section: Model Evaluation (supporting)
confidence: 76%
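The excerpt truncates before the cited equations. As a hedged sketch (these are the conventional definitions, not necessarily the exact formulas labelled (1) and (2) in the citing paper), per-class AP and the overall mAP are typically computed as

AP = \int_0^1 P(R)\,dR
mAP = \frac{1}{N} \sum_{i=1}^{N} AP_i

where P = TP/(TP+FP) is precision, R = TP/(TP+FN) is recall, and N is the number of detection classes.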
“…It can be seen from the figure that the loss of YOLO-Banana is higher than the loss of YOLOv4 before stabilization, and the loss after stabilization is between YOLOv4 and YOLO-Banana-l4; although YOLO-Banana-l4 converges early, the decline rate of loss is the slowest, and the loss is higher than that of YOLO-Banana after 1300 iterations. From the trends of the convergence curve, the three models learned the object features well, and all the loss values after stabilization were less than 1, which shows that the models can be used in detection, similar to the literature [33].…”
Section: Model Evaluation (supporting)
confidence: 75%
“…Monhollen et al [7] developed a corn kernel loss rate detection program based on Faster R-CNN, which achieved an average accuracy of 0.90, the additional field tests obtained the accuracy of 0.91. Suo et al [8] used yolov4 to study the transfer of kiwifruit detection, and obtained the highest mAP of 91.9% with an image processing speed of 25.5 ms. Zhang et al [9] proposed a water-meter pointer-reading recognition method based on improved yolov4; the detection accuracy of this method reached 98.68%, which indicated that the lightweight algorithm could quickly and accurately identify targets. Li et al [10] proposed a rapid detection model for green pepper based on yolov4-tiny, the average precision is 95.11%, the model size is 30.9 MB, and the frame rate is 89 FPS.…”
Section: Introduction (mentioning)
confidence: 99%
“…On the other hand, the one-stage recognition speed is relatively fast, so it is more suitable for mobile applications, because it can directly predict the category and location of objects through features extracted from the network. Commonly used algorithms include YOLOv3 (you only look once v3) [20][21][22], YOLOv4 [23,24], SSD [25], Retina-Net [26], etc. Hou et al [27] proposed a ginger shoot identification method based on YOLOv3, but this method only identifies ginger shoots, resulting in a complex process of calculating ginger shoot orientation, and it had a redundant backbone network.…”
Section: Introduction (mentioning)
confidence: 99%
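The single-pass behaviour described in the excerpt can be illustrated with a short Python sketch using torchvision's RetinaNet, one of the one-stage detectors the excerpt names. This is a minimal, assumed setup: it relies on torch and torchvision >= 0.13 with pretrained COCO weights and a random dummy image, not on the orchard or ginger-shoot models discussed in the cited papers.

import torch
import torchvision

# Load a one-stage detector (RetinaNet) with pretrained COCO weights.
model = torchvision.models.detection.retinanet_resnet50_fpn(weights="DEFAULT")
model.eval()

# Dummy RGB image tensor with values in [0, 1]; a real pipeline would load an orchard image here.
image = torch.rand(3, 480, 640)

with torch.no_grad():
    # A single forward pass returns boxes, class labels, and confidence scores together,
    # which is what makes one-stage detectors attractive for mobile and robotic applications.
    predictions = model([image])

boxes = predictions[0]["boxes"]    # (num_detections, 4) boxes as [x1, y1, x2, y2] pixel coordinates
labels = predictions[0]["labels"]  # predicted COCO class indices
scores = predictions[0]["scores"]  # per-detection confidence scores
print(boxes.shape, labels.shape, scores.shape)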