2021
DOI: 10.1016/j.compag.2020.105900
A novel green apple segmentation algorithm based on ensemble U-Net under complex orchard environment

Cited by 75 publications (39 citation statements)
References 32 publications
“…However, deep learning methods are gradually being applied to agricultural research because they can automatically learn deep feature information from images, and their speed and accuracy exceed those of traditional algorithms [27][28][29][30]. Deep learning has also been applied to the detection of plant diseases from visible-light images.…”
Section: Introduction (mentioning, confidence: 99%)
“…To accurately locate tomato fruit in complex scenes, Liu improved the one-stage YOLOv3 detection model to predict circular regions (Liu et al., 2020); YOLO-Tomato reached a precision of 94.75% with a detection time of 54 ms. Jia optimized Mask R-CNN for apple detection, combining a residual neural network (ResNet) and a dense convolutional network (DenseNet) as the feature extraction network of the original model; precision and recall reached 97.31% and 95.70%, respectively (Jia et al., 2020b). Li proposed an ensemble U-Net segmentation model in which the high-level semantic features of U-Net were integrated with edge features to retain multi-scale contextual information and segment the target fruit efficiently (Li et al., 2021); the recognition rate reached 95.11% and the recognition speed was 0.39 s. Wu et al. (2021) designed a modified YOLOv3 model based on clustering optimization and clarified the influence of front-lighting and backlighting to detect and recognize banana fruits, inflorescence axes, and flower buds. To recognize and detect plant diseases, Chen et al. (2022) proposed an improved plant disease-recognition model based on the YOLOv5 network, adding a new involution bottleneck module, a squeeze-and-excitation (SE) module, and an efficient intersection-over-union (EIoU) loss function to optimize detection performance; mean average precision reached 70%.…”
Section: Introduction (mentioning, confidence: 99%)
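The SE module named in the excerpt above is a standard channel-attention block. Below is a minimal PyTorch sketch of a generic squeeze-and-excitation block, assuming the usual global-pool plus two-layer-MLP design; the channel count and reduction ratio are illustrative placeholders, and this is not claimed to be the exact module used by Chen et al. (2022).

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: channel-wise reweighting of feature maps."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: global spatial context
        self.fc = nn.Sequential(              # excitation: per-channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                          # rescale each channel

# Example: reweight a batch of 64-channel feature maps
feats = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```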
“…The proposed fusion is effective in both the system structure and the training process, improving classification results (Woźniak and Połap, 2018). Li et al. (2021) optimized the U-Net model by combining the atrous spatial pyramid pooling (ASPP) structure and merging U-Net's edge features with its high-level features. In addition, this model obtained semantic boundary information of target fruit images by integrating a residual module and closed convolution, which effectively improved the segmentation accuracy of the target fruit (Li et al., 2021).…”
Section: Introduction (mentioning, confidence: 99%)
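For context, ASPP applies several dilated convolutions in parallel and fuses the results, so a single feature map carries multi-scale context. A minimal PyTorch sketch follows; the dilation rates and channel widths are assumptions, the global-pooling branch of the original DeepLab ASPP is omitted for brevity, and the exact structure in Li et al. (2021) may differ.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated convolutions
    capture multi-scale context from a single feature map."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # rate 1 degenerates to a 1x1 conv; larger rates widen the
                # receptive field without shrinking the feature map
                nn.Conv2d(in_ch, out_ch, kernel_size=3 if r > 1 else 1,
                          padding=r if r > 1 else 0, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # Fuse the concatenated branch outputs back to out_ch channels.
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

# Example: multi-scale context over a 256-channel encoder output
feats = torch.randn(1, 256, 32, 32)
print(ASPP(256, 64)(feats).shape)  # torch.Size([1, 64, 32, 32])
```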
“…Li et al. (2021) optimized the U-Net model by combining the atrous spatial pyramid pooling (ASPP) structure and merging U-Net's edge features with its high-level features. In addition, this model obtained semantic boundary information of target fruit images by integrating a residual module and closed convolution, which effectively improved the segmentation accuracy of the target fruit (Li et al., 2021). Xiong et al. (2020) used the YOLOv2 model to detect green mangoes in images collected by UAVs; the detection error relative to manual measurement was only 1.1%.…”
Section: Introduction (mentioning, confidence: 99%)
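The edge-feature merging described in these excerpts can be pictured as concatenating an edge map with decoder features before the segmentation head. The sketch below uses a Sobel operator as a hypothetical stand-in for the model's edge branch; both the operator and the 1x1 projection are illustrative assumptions, not the mechanism actually used in the ensemble U-Net.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Cheap edge map from a grayscale image (hypothetical stand-in for
    whatever edge branch the cited model actually uses)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    k = torch.stack([kx, ky]).unsqueeze(1)        # shape (2, 1, 3, 3)
    g = F.conv2d(img, k, padding=1)               # x- and y-gradients
    return g.pow(2).sum(1, keepdim=True).sqrt()   # gradient magnitude

class EdgeFusion(nn.Module):
    """Concatenate decoder features with a resized edge map, then project
    back, so boundary information reaches the segmentation head."""
    def __init__(self, feat_ch: int):
        super().__init__()
        self.project = nn.Conv2d(feat_ch + 1, feat_ch, kernel_size=1)

    def forward(self, feats: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        edges = F.interpolate(edges, size=feats.shape[-2:],
                              mode="bilinear", align_corners=False)
        return self.project(torch.cat([feats, edges], dim=1))

# Example: fuse edges of a 128x128 grayscale image into 64-channel features
img = torch.randn(1, 1, 128, 128)
feats = torch.randn(1, 64, 32, 32)
print(EdgeFusion(64)(feats, sobel_edges(img)).shape)  # torch.Size([1, 64, 32, 32])
```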