2023
DOI: 10.1016/j.atech.2023.100231

Performance evaluation of YOLO v5 model for automatic crop and weed classification on UAV images

Cited by 42 publications (12 citation statements)
References 32 publications
“…The maximum number of epochs used in the training of YOLOv8n was 100, the default value indicated by [57]. Nevertheless, because the classification loss curve of YOLOv8n was still decreasing linearly, as shown in Figure 10, more training epochs (100-600) may be required to achieve significant results, as mentioned by [65]. YOLOv8 also underperforms in terms of speed, around 35% slower than MobileNetV3, making MobileNetV3 the most accurate and fastest embedded object detection model, followed by YOLOv8n and then the SqueezeNet model.…”
Section: Results
Mentioning confidence: 99%
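
As a loose illustration of the parameter under discussion, the sketch below shows how the epoch budget could be extended when training YOLOv8n with the ultralytics Python package; that package is an assumption here (the statement does not name the training code), and the dataset file "weeds.yaml" is hypothetical.

# Minimal sketch: extending the default 100-epoch schedule for YOLOv8n.
# Assumes the ultralytics package; "weeds.yaml" is a hypothetical dataset config.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # nano variant referenced in the statement above

# Default is 100 epochs; a longer schedule (e.g. 300) may be warranted while
# the classification loss is still decreasing roughly linearly.
results = model.train(data="weeds.yaml", epochs=300, imgsz=640)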
“…[Recall(k)−Recall(k+1)]: This expression represents the difference between two consecutive sensitivity (recall) values, which corresponds to a horizontal slice of the precision-recall curve [22].…”
Section: Results
Mentioning confidence: 99%
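
The cited expression reads as one term of the discrete average-precision sum, AP = sum over k of [Recall(k) − Recall(k+1)] * Precision(k). The sketch below illustrates that reading (it is not the cited paper's own code) and assumes recall values ordered so that each consecutive difference is non-negative, with an implicit Recall(n) = 0 closing the final slice.

import numpy as np

def average_precision(recall, precision):
    # AP as the sum over k of [Recall(k) - Recall(k+1)] * Precision(k),
    # with an implicit Recall(n) = 0 closing the final slice.
    recall = np.asarray(recall, dtype=float)
    precision = np.asarray(precision, dtype=float)
    recall_next = np.append(recall[1:], 0.0)  # Recall(k+1) aligned with Recall(k)
    return float(np.sum((recall - recall_next) * precision))

# Toy example: three operating points on a precision-recall curve.
print(average_precision([0.9, 0.6, 0.3], [0.5, 0.7, 1.0]))  # 0.66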
“…Deep learning is centered on a far more complex image analysis process whereby meaningful features are automatically extracted from the raw input data, requiring relatively limited user input to develop, train, and evaluate the model to perform classifications. Deep learning models for weed mapping are usually based on some form of convolutional neural network (CNN), with the most popular example among the reviewed studies being the YOLO model [29,41-43]. Despite their ability to produce highly accurate results and requiring relatively minimal user intervention, these models are complex, computationally demanding, and data-intensive, which may limit their feasibility for widespread crop and weed mapping applications.…”
Section: Algorithms and Methodologies
Mentioning confidence: 99%