2017
DOI: 10.3390/rs9030225

Transferring Pre-Trained Deep CNNs for Remote Scene Classification with General Features Learned from Linear PCA Network

Abstract: Deep convolutional neural networks (CNNs) have been widely used to obtain high-level representations in various computer vision tasks. However, in the field of remote sensing, there are not sufficient images to train a useful deep CNN. Instead, we tend to transfer successful pre-trained deep CNNs to remote sensing tasks. In the transferring process, the generalization power of features in pre-trained deep CNNs plays the key role. In this paper, we propose two promising architectures to extract general feat…
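As a reading aid, the following is a minimal sketch of the general transfer pipeline the abstract describes: features from a pre-trained CNN are reduced with linear PCA and fed to a linear classifier. It is not the authors' LPCANet architecture; the backbone choice, feature layer, PCA dimensionality, and data placeholders are illustrative assumptions.

```python
# Sketch: pre-trained CNN as fixed feature extractor + linear PCA + linear SVM.
# Backbone, feature dimension, and dataset handling are assumptions, not the
# paper's exact method.
import numpy as np
import torch
import torchvision.models as models
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

backbone = models.resnet50(weights="IMAGENET1K_V1")  # assumed ImageNet backbone
backbone.fc = torch.nn.Identity()                    # drop the ImageNet classifier head
backbone.eval()

@torch.no_grad()
def extract_features(images: torch.Tensor) -> np.ndarray:
    """images: (N, 3, 224, 224) tensor already normalized for ImageNet."""
    return backbone(images).cpu().numpy()

def train_and_eval(X_train, y_train, X_test, y_test, n_components=128):
    """X_* are image tensors, y_* integer scene labels (hypothetical remote-scene data)."""
    f_train = extract_features(X_train)
    f_test = extract_features(X_test)
    pca = PCA(n_components=n_components).fit(f_train)   # learn linear PCA projection
    clf = LinearSVC().fit(pca.transform(f_train), y_train)
    return clf.score(pca.transform(f_test), y_test)
```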

Cited by 56 publications (25 citation statements)
References: 54 publications
“…It can also be noted that the proposed method outperforms other deep models, such as GoogLeNet [7], which obtains 97.10%, VGG-VD16-1st-FC+Aug [23], which obtains 96.88%, SPP-net+MKL [29], which obtains 96.38%, and MCNN [21], which obtains 96.66%. For the Brazilian Coffee Scene dataset, the proposed method also obtains 91.24%, outperforming the 88.46% obtained by LQPCANet [37], 85.36% by VGG16 [22], 89.79% by ConvNet [38], 90.94% by CaffeNet [7], and 91.13% by D-DSML-CaffeNet [9]. For the Google dataset, the proposed method obtains 92.04%, which is better than 82.81% by TF-CNN [39], 89.88% by RDSG-CNN [39], and 87.68% by fine-tuned CaffeNet.…”
Section: Comparisons With the Most Recent Methods (mentioning)
confidence: 78%
“…Mask-RCNN requires a large amount of annotated data for training to avoid overfitting. To overcome the problem of limited annotated data in the remote sensing domain, we adopted transfer learning by selecting the pre-trained network weights of the ResNet50 model, which was successfully trained on the ImageNet dataset [36]. We utilized the pre-trained ResNet50 and fine-tuned the network weights on the NWPUVHR dataset.…”
Section: Training Phase (mentioning)
confidence: 99%
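The fine-tuning strategy described in the statement above can be sketched as follows, assuming an ImageNet-pretrained ResNet50 whose classifier head is replaced and whose weights are then adapted to the target dataset. The class count, optimizer settings, and data loader are illustrative assumptions, not the cited paper's exact setup.

```python
# Sketch: fine-tuning ImageNet-pretrained ResNet50 weights on a target
# remote sensing dataset. Hyperparameters and dataset are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_CLASSES = 10  # assumption: number of target scene/object categories

model = models.resnet50(weights="IMAGENET1K_V1")          # ImageNet-pretrained weights
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # new task-specific head

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune(train_loader, epochs=10):
    """train_loader yields (images, labels) batches from the target dataset."""
    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```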
“…The experimental performance showed that the proposed algorithm was superior to the classical Reed-Xiaoli detector [26] and the most advanced representation-based detectors, such as sparse representation-based detectors (SRD) [27] and cooperative representation-based detectors [28]. Wang proposed two architectures to extract general features for remote scene classification from pre-trained CNNs [29]. Wang Liwei et al. proposed a hyperspectral image classification method that applies transfer learning to deep residual networks [30], sharing the shallow network weight parameters of the deep residual networks.…”
Section: Introduction (mentioning)
confidence: 99%
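The "share the shallow network weight parameters" idea mentioned in the statement above can be illustrated by freezing the early stages of a pretrained residual network while letting the deeper stages and a new head adapt to the target task. This is a generic sketch using torchvision's ResNet stage names; the split point and network choice are assumptions, not the cited paper's configuration.

```python
# Sketch: reuse (freeze) shallow pretrained residual stages, train deep stages
# and a new classifier head. The frozen-stage split is an assumption.
import torch.nn as nn
import torchvision.models as models

def build_partially_frozen_resnet(num_classes, frozen_stages=("conv1", "bn1", "layer1", "layer2")):
    model = models.resnet50(weights="IMAGENET1K_V1")
    for name, module in model.named_children():
        if name in frozen_stages:
            for p in module.parameters():
                p.requires_grad = False       # keep (share) pretrained shallow weights
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # task-specific head
    return model

# Only parameters with requires_grad=True (deep stages + new head) are updated.
model = build_partially_frozen_resnet(num_classes=16)
trainable = [p for p in model.parameters() if p.requires_grad]
```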