2021 14th International Symposium on Computational Intelligence and Design (ISCID)
DOI: 10.1109/iscid52796.2021.00066
Research on Engineering Vehicle Target Detection in Aerial Photography Environment based on YOLOX

Cited by 11 publications (6 citation statements); references 1 publication.
“…During model training, BCE (Binary Cross Entropy) loss is used for cls and obj training, and IoU loss is used for reg training. Compared with classical YOLO series models, YOLOX adopts the following improvements [12,13].…”
Section: YOLOX Theory
confidence: 99%
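
The loss split described in the quoted statement above (BCE for the classification and objectness branches, IoU loss for box regression) can be illustrated with a minimal PyTorch sketch. This is a hypothetical illustration rather than the cited papers' code; the function names (`iou_loss`, `yolox_style_loss`) and the regression weight are assumptions.

```python
import torch
import torch.nn.functional as F

def iou_loss(pred_boxes, target_boxes, eps=1e-7):
    """1 - IoU for axis-aligned boxes given as (x1, y1, x2, y2), shape (N, 4)."""
    x1 = torch.max(pred_boxes[:, 0], target_boxes[:, 0])
    y1 = torch.max(pred_boxes[:, 1], target_boxes[:, 1])
    x2 = torch.min(pred_boxes[:, 2], target_boxes[:, 2])
    y2 = torch.min(pred_boxes[:, 3], target_boxes[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred_boxes[:, 2] - pred_boxes[:, 0]) * (pred_boxes[:, 3] - pred_boxes[:, 1])
    area_t = (target_boxes[:, 2] - target_boxes[:, 0]) * (target_boxes[:, 3] - target_boxes[:, 1])
    union = area_p + area_t - inter + eps
    return (1.0 - inter / union).mean()

def yolox_style_loss(cls_logits, cls_targets, obj_logits, obj_targets,
                     reg_preds, reg_targets, reg_weight=5.0):
    """Combine per-branch losses: BCE for cls and obj, IoU loss for reg.
    reg_weight=5.0 mirrors the official YOLOX default (assumption)."""
    loss_cls = F.binary_cross_entropy_with_logits(cls_logits, cls_targets)
    loss_obj = F.binary_cross_entropy_with_logits(obj_logits, obj_targets)
    loss_reg = iou_loss(reg_preds, reg_targets)
    return loss_cls + loss_obj + reg_weight * loss_reg
```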
“…37 Furthermore, the effectiveness of some YOLOX-based object detection implementations has been confirmed. 48,49 Because of the comparatively high accuracy and Frames Per Second (FPS) of YOLOX-L, it was chosen as the neural network for dam surface crack detection among all sizes of YOLOX architectures.…”
Section: YOLOX-based Dam Crack Detection
confidence: 99%
“…With detection rates of up to 140 frames per second, YOLOX [26], introduced in 2021, attracted wide attention and is a strong contender for real-time and mobile deployment scenarios. Without changing the target feature extraction network, the YOLOX-S version has been slightly improved for a few domains in the literature [27][28][29][30][31][32]. Feature extraction has also been improved by upgrading the FPN (feature pyramid network), which has led to some gains in target recognition accuracy.…”
Section: Introduction
confidence: 99%
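
The FPN upgrade mentioned in the quote refers to multi-scale feature fusion. Below is a minimal top-down pyramid sketch in PyTorch as a rough illustration; it is a simplified, hypothetical neck (YOLOX itself uses a PAFPN with an additional bottom-up path), and the `TinyFPN` name and channel sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFPN(nn.Module):
    """Fuse backbone maps C3 (stride 8), C4 (stride 16), C5 (stride 32) into P3-P5."""
    def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
        super().__init__()
        # 1x1 lateral convs align channel counts; 3x3 convs smooth the fused maps.
        self.lateral = nn.ModuleList([nn.Conv2d(c, out_channels, 1) for c in in_channels])
        self.smooth = nn.ModuleList([nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                     for _ in in_channels])

    def forward(self, c3, c4, c5):
        p5 = self.lateral[2](c5)
        # Upsample coarser maps and add them to the finer laterals (top-down path).
        p4 = self.lateral[1](c4) + F.interpolate(p5, scale_factor=2, mode="nearest")
        p3 = self.lateral[0](c3) + F.interpolate(p4, scale_factor=2, mode="nearest")
        return self.smooth[0](p3), self.smooth[1](p4), self.smooth[2](p5)
```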