2022 · DOI: 10.1117/1.jei.31.4.043049
Improved YOLOv5-S object detection method for optical remote sensing images based on contextual transformer

Cited by 14 publications (12 citation statements) · References 13 publications
“…In the NWPU VHR-10 dataset, compared to Refs. 32 and 13, LRSNet shows a slightly lower mAP of 0.5%/0.2%. However, its Params (M) and GFLOPs are only 53.2%/52.7% and 21.0%/21.0% of theirs, enabling deployment on devices with more limited hardware resources.…”
Section: Experimental Results and Analysis (mentioning)
Confidence: 90%
“…YOLOv3-Tiny, YOLOv4-Tiny, and YOLOv5n belong to the YOLO series, representing different versions of lightweight general-purpose object detection models. References 28–32 and 13, on the other hand, denote lightweight networks specifically designed for remote sensing object detection. In addition, boldfaced and italicized values in the tables indicate the optimal and suboptimal results, respectively.…”
Section: Experimental Results and Analysis (mentioning)
Confidence: 99%
“…Lang Lei et al. [13] built on YOLOX-Tiny and relied on variable convolution to make the remote sensing object detection model lightweight. Zhou Qikai et al. [14] introduced a weighted bidirectional pyramid into YOLOv5s, which improved the accuracy of ship classification.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Low-quality images captured in severe weather conditions (e.g., dust, haze, and smoke) usually suffer from the problems of color shift and low contrast, which limit the performance of many computer vision algorithms, such as object tracking,1–3 object detection,4,5 and segmentation.6,7 Therefore, it is necessary to study the single-image dehazing algorithm to improve the robustness of computer vision algorithms.…”
Section: Introduction (mentioning)
Confidence: 99%