2017
DOI: 10.1109/lgrs.2017.2708722
M-FCN: Effective Fully Convolutional Network-Based Airplane Detection Framework

Cited by 50 publications (28 citation statements)
References 10 publications
“…This dataset contains a total of 3775 object instances which are manually annotated with horizontal bounding boxes, including 757 airplanes, 390 baseball diamonds, 159 basketball courts, 124 bridges, 224 harbors, 163 ground track fields, 302 ships, 655 storage tanks, 524 tennis courts, and 477 vehicles. This dataset has been widely used in the earth observation community (Cheng et al., 2018b; Farooq et al., 2017; Guo et al., 2018; Han et al., 2017a; Li et al., 2018; Yang et al., 2018b; Yang et al., 2017; Zhong et al., 2018). 4) VEDAI: The VEDAI (Razakarivony and Jurie, 2015) dataset is released for the task of multi-class vehicle detection in aerial images.…”
Section: Object Detection Datasets of Optical Remote Sensing Images
confidence: 99%
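For quick reference, the per-class counts reported in the excerpt above can be gathered into a small Python dictionary. This is purely an illustrative sketch of the figures quoted from the citing survey (the dictionary name is made up, not part of any dataset API); the counts correspond to the ten-class benchmark described in the excerpt.

```python
# Class counts as reported in the citation excerpt above (3775 instances total).
# The variable name is illustrative only; it is not an official identifier.
TEN_CLASS_DATASET_COUNTS = {
    "airplane": 757,
    "baseball diamond": 390,
    "basketball court": 159,
    "bridge": 124,
    "harbor": 224,
    "ground track field": 163,
    "ship": 302,
    "storage tank": 655,
    "tennis court": 524,
    "vehicle": 477,
}

# Sanity check against the total stated in the excerpt.
assert sum(TEN_CLASS_DATASET_COUNTS.values()) == 3775
```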
“…Object detection plays a crucial role in image interpretation and also is very important for a wide scope of applications, such as intelligent monitoring, urban planning, precision agriculture, and geographic information system (GIS) updating. Driven by this requirement, significant efforts have been made in the past few years to develop a variety of methods for object detection in optical remote sensing images (Aksoy, 2014; Bai et al., 2014; Cheng et al., 2013a; Cheng and Han, 2016; Cheng et al., 2013b; Cheng et al., 2014; Cheng et al., 2019; Cheng et al., 2016a; Das et al., 2011; Han et al., 2015; Han et al., 2014; Li et al., 2018; Long et al., 2017; Tang et al., 2017b; Yang et al., 2017; Zhang et al., 2016; Zhang et al., 2017; Zhou et al., 2016).…”
Section: Introduction
confidence: 99%
“…However, the aforementioned models cannot be directly utilized for geospatial object detection, because the properties of remote sensing images and natural images are different and the direct application of those models to remote sensing images is not optimal. Researchers have done a lot of work in applying CNN-based models to detect geospatial objects in remote sensing images and achieved remarkable results [4,15-25,45]. For example, the work in [4] utilized a hyper-region proposal network (HRPN) and a cascade of boosted classifiers to detect vehicles in remote sensing images.…”
Section: Geospatial Object Detection
confidence: 99%
“…Therefore, it is important for us to choose a method to extract features for object detection in remote sensing images. Currently, because of the advantage of directly generating more powerful feature representations from raw image pixels through neural networks, deep learning methods, especially CNN-based ones [4,8-25], are recognized as the predominant techniques for extracting features in object detection. Therefore, we select a CNN-based approach to extract features for object detection in optical remote sensing images.…”
Section: Introduction
confidence: 99%
“…Of these layers, convolutional layers can identify the local features of images; pooling layers compress numerous features to extract the main features; activation layers effectively express nonlinear features; and concatenation layers integrate features over multiple scales (LeCun et al. 1990, 1998, 2015; Szegedy et al. 2014). Through transmitting information between multiple convolutional layers, pooling layers, activation layers and concatenation layers, FCNs effectively integrate remote sensing data from multiple sources and features over multiple scales to identify objects accurately (Chen et al. 2017; Yang et al. 2017). Furthermore, FCNs have the ability of transfer learning, which enables the extraction of dynamic information using parameters obtained from training samples in a single period.…”
Section: Introduction
confidence: 99%
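The layer roles named in this excerpt (convolution for local features, pooling for compression, activation for nonlinearity, concatenation for multi-scale fusion) can be illustrated with a minimal fully convolutional sketch. The PyTorch module below is an assumed toy example only; it is not the M-FCN architecture or any network from the cited papers, and all names in it are made up for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyFCN(nn.Module):
    """Toy fully convolutional sketch (not M-FCN): convolution extracts local
    features, pooling compresses them, ReLU adds nonlinearity, and concatenation
    fuses features from two scales before a per-pixel prediction head."""

    def __init__(self, in_channels=3, num_classes=2):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)                      # pooling: compress features
        self.head = nn.Conv2d(16 + 32, num_classes, 1)   # 1x1 conv: per-pixel scores

    def forward(self, x):
        f1 = F.relu(self.conv1(x))                       # conv + activation: local features
        f2 = F.relu(self.conv2(self.pool(f1)))           # deeper features at half resolution
        f2_up = F.interpolate(f2, size=f1.shape[2:],     # upsample back to full resolution
                              mode="bilinear", align_corners=False)
        fused = torch.cat([f1, f2_up], dim=1)            # concatenation: multi-scale fusion
        return self.head(fused)                          # dense (fully convolutional) output


# Because every layer is convolutional, the same module accepts patches of any size.
scores = TinyFCN()(torch.randn(1, 3, 128, 128))
print(scores.shape)  # torch.Size([1, 2, 128, 128])
```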