2022
DOI: 10.3934/mbe.2022602

Lightweight tea bud recognition network integrating GhostNet and YOLOv5

Abstract: Aiming at the low detection accuracy and slow speed caused by the complex background of tea sprouts and their small target size, this paper proposes a tea bud detection algorithm integrating GhostNet and YOLOv5. To reduce the number of parameters and shorten detection time, the GhostNet module is introduced. A coordinate attention mechanism is then added to the backbone layer to enhance the feature extraction ability of the model. A bi-directional feature pyramid network…
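The abstract's core idea, replacing standard convolutions with GhostNet's Ghost modules to cut parameters, can be illustrated with a minimal PyTorch sketch. The channel ratio, kernel sizes, and SiLU activation below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a Ghost module (GhostNet-style), assuming a YOLOv5-like setting.
import math
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Produce out_ch feature maps: a few 'intrinsic' maps from a normal conv,
    the rest as cheap depthwise 'ghost' maps, then concatenate."""
    def __init__(self, in_ch, out_ch, ratio=2, dw_kernel=3):
        super().__init__()
        self.out_ch = out_ch
        init_ch = math.ceil(out_ch / ratio)        # intrinsic maps
        cheap_ch = init_ch * (ratio - 1)           # ghost maps
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.SiLU(),                             # YOLOv5-style activation (assumption)
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, cheap_ch, dw_kernel, padding=dw_kernel // 2,
                      groups=init_ch, bias=False), # depthwise: very few parameters
            nn.BatchNorm2d(cheap_ch),
            nn.SiLU(),
        )

    def forward(self, x):
        y = self.primary(x)
        out = torch.cat([y, self.cheap(y)], dim=1)
        return out[:, :self.out_ch]                # trim to the requested channel count

# Usage: GhostModule(64, 128)(torch.randn(1, 64, 80, 80)).shape -> (1, 128, 80, 80)
```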

Cited by 44 publications (23 citation statements) | References 30 publications
“…CNN is made up of an input layer, an output layer, and several hidden layers. The YOLO architecture is made up of only two layers: a convolution layer and a pooling layer [ 40 ].…”
Section: Methods (mentioning)
confidence: 99%
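For concreteness, the convolution-plus-pooling pairing the quoted statement refers to can be written as a single stage; in practice YOLO backbones stack many such stages alongside other layers. The channel counts and kernel sizes below are arbitrary assumptions.

```python
# Illustrative convolution + pooling stage (hypothetical sizes).
import torch.nn as nn

conv_pool_stage = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1),  # convolution layer
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),  # pooling layer: halves spatial resolution
)
```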
“…In a dataset of 4792 images [9], 4.6k images (95%) are used for model training and 240 (5%) for model validation. All experiments are run separately in the Google Colab environment on an NVIDIA Tesla T4 GPU with a low learning rate of 0.001, where GAANet and GhostNet-YOLOv5 [24] have batch sizes of 256 and 512, respectively. The epochs for both models are set to 500, where GhostNet-YOLOv5 [24] stopped training at 300 epochs via early stopping as its performance stopped improving, while GAANet stopped training at 457 epochs.…”
Section: A. Dataset and Model Training (mentioning)
confidence: 99%
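The 95/5 split described in the quoted passage (4792 images, ~4.6k for training, 240 for validation) can be reproduced with a short sketch; the directory layout and random seed below are assumptions for illustration.

```python
# Sketch of the 95/5 train/validation split reported in the citing paper.
import random
from pathlib import Path

images = sorted(Path("dataset/images").glob("*.jpg"))  # assumed directory layout
random.Random(0).shuffle(images)                       # assumed seed

n_val = 240                       # 5% of 4792, as reported
val_set = images[:n_val]
train_set = images[n_val:]        # remaining ~4552 images (~95%)

print(len(train_set), len(val_set))
```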
“…The detailed evaluation of both trained models, GAANet and GhostNet-YOLOv5 [24], is performed by comparing the true positive (TP), true negative (TN), false negative (FN), false positive (FP), mAP, precision, and recall values. The GAANet model achieved the highest TP value of 1.00 for drones and planes and the lowest TP of 0.72 for helicopters.…”
Section: B. Evaluation and Comparison of Trained Models (mentioning)
confidence: 99%
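The precision and recall values underlying the quoted comparison follow directly from the TP/FP/FN counts. The counts below are placeholders, not the paper's results.

```python
# Precision = TP / (TP + FP), Recall = TP / (TP + FN).
def precision(tp, fp):
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    return tp / (tp + fn) if (tp + fn) else 0.0

# Hypothetical per-class counts, for illustration only
tp, fp, fn = 90, 10, 5
print(f"precision={precision(tp, fp):.2f}, recall={recall(tp, fn):.2f}")
```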
“…To solve the problems of insufficient accuracy and lack of robustness in the process of insulator defect fault detection, this paper proposes a YOLOv5 based on a receptive field module and multiscale. The main work is as follows: anchor frames are obtained that match the size of the detected target by k-means clustering to improve the detection accuracy of the network for target objects with different proportions; the low-level detail features are extracted from the network and fused with the deepest semantic features to the small-scale detection layer designed in this paper to improve the detection performance of the network model for small-area targets; a lightweight backbone network is built using the GhostNet [26] lightweight network to reduce convolution operations and improve the real-time performance of the model while ensuring detection accuracy; the channel receptive field block (CRF) receptive field module that integrates channel information is designed at the network head to replace the original SPP module [27], integrate channel information, fuse multiscale feature information, and use dilated convolution to reduce the calculation of redundant information.…”
Section: Introduction (mentioning)
confidence: 99%
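The anchor-clustering step mentioned in the quoted passage can be sketched as k-means over ground-truth box widths and heights. This sketch uses scikit-learn with a plain Euclidean distance for brevity; YOLO-style implementations typically cluster with an IoU-based distance, and the box sizes generated below are purely hypothetical.

```python
# Sketch of k-means anchor clustering: centroids of (width, height) pairs become anchors.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_anchors(wh, k=9, seed=0):
    """wh: (N, 2) array of ground-truth box widths and heights (e.g., in pixels)."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(wh)
    anchors = km.cluster_centers_
    return anchors[np.argsort(anchors.prod(axis=1))]   # sort anchors by area

# Hypothetical box sizes for illustration
wh = np.random.RandomState(0).uniform(10, 300, size=(500, 2))
print(kmeans_anchors(wh, k=9).round(1))
```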