2019
DOI: 10.3390/ijgi8110483

An Efficient and Scene-Adaptive Algorithm for Vehicle Detection in Aerial Images Using an Improved YOLOv3 Framework

Abstract: Vehicle detection in aerial images has attracted great attention as an approach to providing the necessary information for transportation road network planning and traffic management. However, because of the low resolution, complex scene, occlusion, shadows, and high requirement for detection efficiency, implementing vehicle detection in aerial images is challenging. Therefore, we propose an efficient and scene-adaptive algorithm for vehicle detection in aerial images using an improved YOLOv3 framework, and it…
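As background for the truncated abstract, the sketch below shows one common way a YOLOv3-style detector is applied to large aerial images: tile the image into overlapping crops, run the detector per tile, shift boxes back to full-image coordinates, and merge duplicates with non-maximum suppression. The tile size, overlap, and the detect_fn callable are illustrative assumptions; this is a generic baseline, not the improved framework proposed in the paper.

```python
import numpy as np

def detect_on_tiles(image, detect_fn, tile=608, overlap=100, iou_thr=0.5):
    """Run a per-tile detector over a large aerial image of shape (H, W, 3).

    detect_fn(crop) -> array of [x1, y1, x2, y2, score] rows in crop coordinates.
    The 608x608 tile size matches a common YOLOv3 input resolution (assumption).
    """
    H, W = image.shape[:2]
    step = tile - overlap
    boxes = []
    for y0 in range(0, max(H - overlap, 1), step):
        for x0 in range(0, max(W - overlap, 1), step):
            crop = image[y0:y0 + tile, x0:x0 + tile]
            det = np.asarray(detect_fn(crop), dtype=float).reshape(-1, 5)
            if det.size:
                det[:, [0, 2]] += x0   # shift x back to full-image coordinates
                det[:, [1, 3]] += y0   # shift y back to full-image coordinates
                boxes.append(det)
    if not boxes:
        return np.empty((0, 5))
    return nms(np.concatenate(boxes), iou_thr)

def nms(dets, iou_thr):
    """Greedy non-maximum suppression on [x1, y1, x2, y2, score] rows."""
    order = dets[:, 4].argsort()[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        xx1 = np.maximum(dets[i, 0], dets[rest, 0])
        yy1 = np.maximum(dets[i, 1], dets[rest, 1])
        xx2 = np.minimum(dets[i, 2], dets[rest, 2])
        yy2 = np.minimum(dets[i, 3], dets[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (dets[i, 2] - dets[i, 0]) * (dets[i, 3] - dets[i, 1])
        area_r = (dets[rest, 2] - dets[rest, 0]) * (dets[rest, 3] - dets[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[iou < iou_thr]
    return dets[keep]
```

The overlap keeps vehicles that straddle a tile boundary from being cut in half in every crop; the final NMS removes the duplicate detections this produces.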


Cited by 11 publications (7 citation statements)
References 24 publications
“…Zheng et al [22] augmented the training dataset with synthesized vehicle images and improved performance. Zhang et al [23] adopted the YOLOv3 framework and proposed a scene-adaptive feature map fusion algorithm for effective vehicle detection in video data.…”
Section: Vehicle Detection by Deep Learning
confidence: 99%
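The "scene-adaptive feature map fusion" named in the quote is not detailed on this page. The following is a minimal sketch of one plausible interpretation, in which per-scale fusion weights are predicted from a global scene descriptor and used to blend multi-scale feature maps. The shapes, the pooling step, and the weight-prediction matrix are all assumptions for illustration, not the actual algorithm of Zhang et al. [23].

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample2x(fm, times):
    """Nearest-neighbour upsampling of a (C, H, W) feature map by 2**times."""
    for _ in range(times):
        fm = fm.repeat(2, axis=1).repeat(2, axis=2)
    return fm

def scene_adaptive_fuse(feature_maps, proj):
    """Blend multi-scale maps with weights predicted from a scene descriptor.

    feature_maps: list of (C, H_i, W_i) arrays, finest first (e.g. strides 8/16/32).
    proj: (C, n_scales) matrix standing in for a learned weight-prediction layer.
    """
    # Global scene descriptor: average-pool the coarsest (most semantic) map.
    scene_vec = feature_maps[-1].mean(axis=(1, 2))        # (C,)
    logits = scene_vec @ proj                              # (n_scales,)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                               # softmax over scales

    # Bring every map to the finest resolution and take the weighted sum.
    target_h = feature_maps[0].shape[1]
    fused = np.zeros_like(feature_maps[0], dtype=float)
    for w, fm in zip(weights, feature_maps):
        times = int(np.log2(target_h // fm.shape[1]))
        fused += w * upsample2x(fm, times)
    return fused, weights

# Toy usage: three YOLOv3-like scales with 256 channels each.
fms = [rng.normal(size=(256, 52, 52)),
       rng.normal(size=(256, 26, 26)),
       rng.normal(size=(256, 13, 13))]
proj = rng.normal(size=(256, 3))
fused, w = scene_adaptive_fuse(fms, proj)
print(fused.shape, w)   # (256, 52, 52) plus the per-scale fusion weights
```

The point of the sketch is only that the fusion weights depend on a whole-image (scene) statistic rather than being fixed, which is one natural reading of "scene-adaptive".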
“…and did not enhance the background features that would be significantly different in a different area. The method of Zhang et al [23] is expected to be relatively robust to image feature difference because their YOLOv3-based architecture includes FPN as a component, and FPN effectively captures semantic features that would be robust to low-level image feature differences. However, this is not a direct solution, and thus, it would not fully solve the problem (We will confirm this by using M2Det architecture in Section 4.6).…”
Section: Vehicle Detection by Deep Learning
confidence: 99%
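To make the FPN argument in the quote concrete, below is a minimal numpy sketch of the standard FPN top-down pathway: coarse, semantically strong maps are upsampled and added to laterally projected finer maps, so every output level carries high-level semantics. The 1x1 lateral projections are random stand-ins for learned convolutions; this is generic FPN, not the exact architecture used in [23].

```python
import numpy as np

rng = np.random.default_rng(1)

def lateral_1x1(fm, w):
    """A 1x1 convolution is per-pixel channel mixing: apply (C_out, C_in) to (C_in, H, W)."""
    c, h, width = fm.shape
    return (w @ fm.reshape(c, -1)).reshape(w.shape[0], h, width)

def fpn_top_down(c3, c4, c5, out_ch=256):
    """Build P3-P5 from backbone maps C3 (stride 8), C4 (stride 16), C5 (stride 32)."""
    w3 = rng.normal(size=(out_ch, c3.shape[0])) * 0.01
    w4 = rng.normal(size=(out_ch, c4.shape[0])) * 0.01
    w5 = rng.normal(size=(out_ch, c5.shape[0])) * 0.01

    p5 = lateral_1x1(c5, w5)
    # Upsample the coarser level 2x and add it to the lateral projection below.
    p4 = lateral_1x1(c4, w4) + p5.repeat(2, axis=1).repeat(2, axis=2)
    p3 = lateral_1x1(c3, w3) + p4.repeat(2, axis=1).repeat(2, axis=2)
    return p3, p4, p5

# Toy backbone outputs for a 416x416 input.
c3 = rng.normal(size=(256, 52, 52))
c4 = rng.normal(size=(512, 26, 26))
c5 = rng.normal(size=(1024, 13, 13))
p3, p4, p5 = fpn_top_down(c3, c4, c5)
print(p3.shape, p4.shape, p5.shape)   # (256, 52, 52) (256, 26, 26) (256, 13, 13)
```

Because every P-level mixes in the upsampled C5 semantics, the detector is less tied to low-level appearance statistics, which is the robustness property the citing authors point to.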
“…[69] Development of the Deep Vehicle Counting Framework based on Enhanced-SSD
[70] Comparison between YOLOv3 (best model) and Faster R-CNN
[71] Detection model based on two CNNs that adopt the VGG-16 model
[72] EOVNet (Earth observation image-based vehicle detection network), a modified Faster R-CNN
[1] Improved Faster R-CNN with Multiscale Feature Fusion and Homography Augmentation
[73] R3-Net, a deep network for multi-oriented vehicle detection
[74] Detection algorithm based on Faster R-CNN
[75] Systematic investigation of the Fast R-CNN and Faster R-CNN in vehicle detection
[5] YOLOv3, vehicle tracking using deep appearance features, and Kalman filtering for motion estimation
[7] Model based on multi-task cost-sensitive convolutional neural network (MTCS-CNN)
[76] Novel double focal loss convolutional neural network (DFLCNN)
[77] Improved YOLOv3 using a sloping bounding box attached to the angle of the target vehicles
[78] Orientation-aware feature fusion single-stage detection (OAFF-SSD)
[15] Detection model for different scales using CNN and proposition of an Outlier-Aware Non-Maximum Suppression
[79] Comparison among Faster R-CNN, R-FCN, and SSD (best model)
[80] Optimized DL model considering feature extraction, object detection, and non-maximum suppression…”
Section: Paper
confidence: 99%
“…Won et al proposed increasing the recognition speed by decreasing the Darknet-53 to 24 layers [34]. Zhang and Zhu introduced the sloping anchor box to overcome the flaws of the traditional horizontal bounding box, which is intended to predict the target position and angle [35].…”
Section: Related Work
confidence: 99%
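The "sloping anchor box" mentioned for [35] represents a vehicle as a rotated rectangle rather than an axis-aligned one. As a hedged illustration (not the exact parameterization used in that work), the sketch below converts a (cx, cy, w, h, theta) rotated box into its four corner points, which is the usual step before drawing or evaluating such predictions.

```python
import numpy as np

def rotated_box_corners(cx, cy, w, h, theta):
    """Corners of a rotated box; theta is the rotation angle in radians (CCW).

    Returns a (4, 2) array of (x, y) corner coordinates.
    """
    # Half-extent corner offsets of the axis-aligned box, centred at the origin.
    dx, dy = w / 2.0, h / 2.0
    corners = np.array([[-dx, -dy], [dx, -dy], [dx, dy], [-dx, dy]])
    # Standard 2D rotation matrix applied to each corner, then shifted to the centre.
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return corners @ rot.T + np.array([cx, cy])

# Example: a roughly 4.8 x 1.8 unit vehicle footprint rotated by 30 degrees.
print(rotated_box_corners(100.0, 50.0, 4.8, 1.8, np.deg2rad(30.0)))
```

Predicting theta alongside the box extents is what lets an oriented detector fit tightly around vehicles seen at arbitrary headings in nadir aerial imagery, where axis-aligned boxes would include large background margins.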