2022
DOI: 10.3390/s22218221

AIE-YOLO: Auxiliary Information Enhanced YOLO for Small Object Detection

Abstract: Small object detection is one of the key challenges in the current computer vision field because small objects carry little information and further information is lost during feature extraction. You Only Look Once v5 (YOLOv5) adopts the Path Aggregation Network to alleviate this information loss, but it cannot recover information that has already been lost. To this end, an auxiliary information-enhanced YOLO is proposed to improve the sensitivity and detection performance of YOLOv5 for small objects. Firstly, a…
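The Path Aggregation Network mentioned in the abstract fuses backbone features along a top-down and then a bottom-up path so that deep semantic cues and shallow localization cues reach every detection scale. The sketch below is a minimal, illustrative version of that fusion pattern, not the AIE-YOLO or YOLOv5 implementation; the channel widths, layer names and input shapes are assumptions.

```python
import torch
import torch.nn as nn


class TinyPANet(nn.Module):
    """Minimal PANet-style neck: top-down fusion followed by bottom-up fusion."""

    def __init__(self, channels=(128, 256, 512)):
        super().__init__()
        c3, c4, c5 = channels
        # top-down path: bring deep semantic features to the higher-resolution maps
        self.reduce_p5 = nn.Conv2d(c5, c4, 1)
        self.fuse_p4 = nn.Conv2d(c4 * 2, c4, 3, padding=1)
        self.reduce_p4 = nn.Conv2d(c4, c3, 1)
        self.fuse_p3 = nn.Conv2d(c3 * 2, c3, 3, padding=1)
        # bottom-up path: push precise localization cues back to the deeper maps
        self.down_p3 = nn.Conv2d(c3, c3, 3, stride=2, padding=1)
        self.fuse_n4 = nn.Conv2d(c3 + c4, c4, 3, padding=1)
        self.down_p4 = nn.Conv2d(c4, c4, 3, stride=2, padding=1)
        self.fuse_n5 = nn.Conv2d(c4 + c5, c5, 3, padding=1)

    def forward(self, p3, p4, p5):
        up = lambda t: nn.functional.interpolate(t, scale_factor=2, mode="nearest")
        t4 = self.fuse_p4(torch.cat([up(self.reduce_p5(p5)), p4], dim=1))  # top-down to P4
        t3 = self.fuse_p3(torch.cat([up(self.reduce_p4(t4)), p3], dim=1))  # top-down to P3
        n4 = self.fuse_n4(torch.cat([self.down_p3(t3), t4], dim=1))        # bottom-up to P4
        n5 = self.fuse_n5(torch.cat([self.down_p4(n4), p5], dim=1))        # bottom-up to P5
        return t3, n4, n5


# Example with assumed pyramid shapes: outputs keep the same shapes as the inputs.
p3, p4, p5 = torch.randn(1, 128, 80, 80), torch.randn(1, 256, 40, 40), torch.randn(1, 512, 20, 20)
outs = TinyPANet()(p3, p4, p5)
```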


Cited by 22 publications (8 citation statements)
References 39 publications
“…Machine vision has become a crucial research area in the field of autonomous driving. Among object detection algorithms, the YOLOv5s algorithm uses convolutional neural networks to compute the positions of the objects to be recognized 22–24, classifying and localizing them accurately. YOLOv5s is a high-accuracy neural network that surpasses the limitations of traditional image processing algorithms.…”
Section: Methods (mentioning)
Confidence: 99%
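The quoted statement describes YOLOv5s classifying and localizing objects end to end with a convolutional network. Below is a minimal sketch of running the off-the-shelf pretrained YOLOv5s model via torch.hub; the image path is a placeholder, and network access to the ultralytics/yolov5 repository is assumed.

```python
import torch

# Load the pretrained YOLOv5s model from the ultralytics/yolov5 hub repo.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Run inference on a single image (placeholder path).
results = model("path/to/street_scene.jpg")
results.print()            # per-detection class, confidence and box summary
boxes = results.xyxy[0]    # tensor of [x1, y1, x2, y2, confidence, class] rows
```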
“…Furthermore, they made improvements to the cluster center distance and the loss function parameters, achieving an mAP of 92.03%. Yan et al. [30] proposed a YOLOv5-based method that uses multi-scale receptive fields to capture image information and introduces attention branches to enhance feature expression, addressing the tendency of small-target features, such as pedestrians, to disappear. Dewi et al. [31] trained traffic sign detectors on datasets augmented with generative adversarial networks, achieving an mAP of 84.9% with YOLOv3 and 89.33% with YOLOv4.…”
Section: Related Work (mentioning)
Confidence: 99%
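The approach attributed to Yan et al. [30] above combines multi-scale receptive fields with an attention branch that strengthens feature expression for small targets. The block below is a hedged sketch of that general pattern under assumed branch counts, dilation rates and a squeeze-and-excitation-style attention branch; it is not the cited implementation.

```python
import torch
import torch.nn as nn


class MultiScaleAttentionBlock(nn.Module):
    """Parallel dilated branches for multiple receptive fields, re-weighted by channel attention."""

    def __init__(self, channels=256):
        super().__init__()
        # three branches with growing receptive fields via dilation (assumed rates)
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in (1, 2, 3)
        )
        self.fuse = nn.Conv2d(channels * 3, channels, 1)
        # attention branch: global context -> per-channel weights in (0, 1)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 8, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 8, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        multi = torch.cat([branch(x) for branch in self.branches], dim=1)
        fused = self.fuse(multi)
        return fused * self.attn(fused)  # emphasize channels carrying small-object cues


feat = torch.randn(1, 256, 40, 40)
out = MultiScaleAttentionBlock()(feat)   # same shape as the input: (1, 256, 40, 40)
```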
“…Wang et al. [18] combined a small-object detection layer with the YOLOv4 network for traffic sign detection. Yan et al. [19] combined attention with YOLOv5 and proposed a new model for traffic sign detection. These methods have inspired us tremendously.…”
Section: YOLOv5s (mentioning)
Confidence: 99%
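The small-object detection layer mentioned for Wang et al. [18] amounts to attaching an additional prediction head to a higher-resolution (smaller-stride) feature map so that tiny objects cover more grid cells. The snippet below is only a sketch of that idea; the anchor count, class count, channel width and feature-map shape are assumptions.

```python
import torch
import torch.nn as nn

num_anchors, num_classes = 3, 80
p2 = torch.randn(1, 128, 160, 160)                        # assumed stride-4 feature map for a 640x640 input

# One 1x1 convolution predicting box, objectness and class logits per anchor at every cell.
head_p2 = nn.Conv2d(128, num_anchors * (5 + num_classes), 1)
pred = head_p2(p2)                                        # (1, 255, 160, 160): dense predictions for small objects
```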