Target Classification Using the Deep Convolutional Networks for SAR Images
2016, DOI: 10.1109/tgrs.2016.2551720

Cited by 1,077 publications (611 citation statements)
References 22 publications
“…It is 8.67%, 5.43%, 5.68%, 2.85%, 4.09% better than SVM, sparse representation of monogenic signal (MSRC) [18], tri-task joint sparse representation (TJSR) [48], supervised discriminative dictionary learning and sparse representation (SDDLSR) [8] and joint dynamic sparse representation (JDSR) [49]. In addition, our method can achieve a comparable performance to the state-of-the-art methods based on deep learning (A-ConvNet [24] and DCHUN [50]), shown in Figure 12. In order to verify the advantage of our method for a further step, experiments have been conducted on different sizes of training dataset.…”
Section: The Effectiveness of Transfer Learning
Citation type: mentioning; confidence: 92%
“…Moreover, Wilmanski et al [32] explored different learning algorithms of training CNNs, finding that the AdaDelta technique that can update the various learning rates of hyper-parameters outperformed the other techniques such as stochastic gradient descent (SGD) and AdaGrad. Recently, in [24], a five-layer all-convolutional network was proposed. The authors adopted a drop-out method in a convolution layer and removed the fully connected layer to avoid over-fitting since the limited training data was insufficient to train the deep CNNs.…”
Section: SAR Target Recognition with CNNs
Citation type: mentioning; confidence: 99%
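The excerpt above summarizes two design choices: the five-layer all-convolutional network of [24], which replaces the fully connected classifier with a convolution and applies dropout inside the convolutional stack to limit over-fitting on limited SAR training data, and the AdaDelta training examined in [32]. The sketch below illustrates that combination in PyTorch; the channel widths, kernel sizes, and the 88x88 chip size are illustrative assumptions, not the published configuration.

```python
# Hypothetical all-convolutional SAR classifier in the spirit of A-ConvNet [24]:
# no fully connected layer, dropout applied inside the convolutional stack.
# Layer sizes below are assumptions for illustration only.
import torch
import torch.nn as nn

class AllConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 88 -> 42
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # 42 -> 19
            nn.Conv2d(32, 64, kernel_size=6), nn.ReLU(), nn.MaxPool2d(2),  # 19 -> 7
            nn.Conv2d(64, 128, kernel_size=5), nn.ReLU(),                  # 7 -> 3
            nn.Dropout2d(p=0.5),                        # dropout in a convolution layer
            nn.Conv2d(128, num_classes, kernel_size=3)  # convolution replaces the FC layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x).flatten(1)  # logits, shape (batch, num_classes)

# AdaDelta keeps per-parameter running averages, giving each weight its own
# effective step size, the property highlighted for SAR CNN training in [32].
model = AllConvNet()
optimizer = torch.optim.Adadelta(model.parameters(), rho=0.9)
logits = model(torch.randn(4, 1, 88, 88))  # 88x88 SAR chips (assumed input size)
```

Because the classifier head is itself a convolution, the parameter count stays far below that of a fully connected layer, which is the over-fitting argument the excerpt attributes to [24].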
“…Tang et al [11] used a deep neural network for ship detection. Convolution neural networks have been widely used in remote sensing for scene classification [12], image segmentation [13] and target classification in SAR data [14], and recurrent neural network is utilized for learning land cover change [15]. Stacked Denoising Autoencoder (SDAE), an improved model of SAE, has made outstanding achievements in areas such as speech recognition [16] and other domains.…”
Section: Stacked Denoising Autoencoder Model
Citation type: mentioning; confidence: 99%
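The excerpt above names the Stacked Denoising Autoencoder (SDAE) as an improved form of the SAE. As a rough illustration of its building block, the sketch below corrupts the input and trains the network to reconstruct the clean signal; stacking several such encoders yields the SDAE. The additive Gaussian corruption and the layer sizes are assumptions chosen for brevity.

```python
# Minimal denoising-autoencoder sketch (assumed sizes): corrupt the input,
# then reconstruct the clean signal. Stacked copies of this block form an SDAE.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, in_dim: int = 784, hidden_dim: int = 128, noise_std: float = 0.3):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        corrupted = x + self.noise_std * torch.randn_like(x)  # additive Gaussian noise
        return self.decoder(self.encoder(corrupted))

dae = DenoisingAutoencoder()
x = torch.rand(32, 784)                   # a batch of flattened inputs
loss = nn.functional.mse_loss(dae(x), x)  # reconstruct the clean input
loss.backward()
```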