2016
DOI: 10.1109/lgrs.2016.2586109

Stacked Sparse Autoencoder in PolSAR Data Classification Using Local Spatial Information

Cited by 122 publications (72 citation statements); references 10 publications.
“…With the classification of seven types of land cover in level-1, we mainly focused on the top-2 predictions ȳ ∈ ℝ². We define the label space Y = {1, 2, ..., n} and the ground-truth label y ∈ Y; the top-2 predictions satisfy ȳ ∈ Y^(2). The subset of Y^(2) in which the ground truth y is included is written as Y^(2)_y.…”
Section: Top-2 Smooth Loss Function
confidence: 99%
“…We define the label space Y = {1, 2, ..., n} and the ground-truth label y ∈ Y; the top-2 predictions satisfy ȳ ∈ Y^(2). The subset of Y^(2) in which the ground truth y is included is written as Y^(2)_y. The top-2 smooth loss function with temperature can be defined as:…”
Section: Top-2 Smooth Loss Function
confidence: 99%
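The loss equation itself is not reproduced in the excerpt above. As a hedged illustration of the notation Y^(2) and Y^(2)_y, here is a minimal numpy sketch of one common smooth top-2 loss with temperature — a log-sum-exp over all candidate top-2 sets with a 0/1 margin, minus a log-sum-exp over the sets containing the true label (following the smooth top-k construction of Berrada et al.; the function name and hyperparameters are illustrative, not taken from the cited paper):

```python
import itertools
import numpy as np

def top2_smooth_loss(scores, y, tau=1.0):
    """Sketch of a smooth top-2 loss with temperature tau.

    scores: (n,) array of class scores; y: index of the ground-truth label.
    Candidate top-2 sets are all unordered pairs of labels (the set Y^(2));
    pairs containing y form the subset Y^(2)_y.
    """
    n = len(scores)
    num_terms, den_terms = [], []
    for pair in itertools.combinations(range(n), 2):
        avg = (scores[pair[0]] + scores[pair[1]]) / 2.0
        delta = 0.0 if y in pair else 1.0   # 0/1 margin: penalise sets missing y
        num_terms.append((delta + avg) / tau)
        if y in pair:                        # denominator runs over Y^(2)_y only
            den_terms.append(avg / tau)
    lse = lambda t: np.log(np.sum(np.exp(np.asarray(t))))
    return tau * (lse(num_terms) - lse(den_terms))
```

Because every denominator term also appears in the numerator (with zero margin), the loss is non-negative, and it shrinks as the true class enters the top-2 with a growing margin.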
“…Deep features and shallow features were combined, and a fully convolutional classifier was proposed in Yan et al. (2018) for better performance. As another form of deep learning, autoencoders have also been used for PolSAR image classification (Hou et al., 2017; Zhang et al., 2016; De et al., 2018b). Some scholars sought higher accuracy by developing more powerful network architectures or better combinations of hyperparameters (Wang et al., 2018a; Zhou et al., 2017; De et al., 2018a).…”
Section: Introduction
confidence: 99%
“…Deep learning can be realized by deep neural networks that have multiple hidden layers and nonlinear activation functions. This technique has been successfully used in the fields of image classification (Krizhevsky, Sutskever, & Hinton, 2012), object recognition (He, Lau, Liu, Huang, & Yang, 2015; Simonyan & Zisserman, 2015), and remote sensing (Chen, Jiang, Li, Jia, & Ghamisi, 2016; Chen, Zhao, & Jia, 2015; Hou, Kou et al., 2016; Liu, Jiao, Hou, & Yang, 2016; Liu et al., 2017; Zhang, Ma, & Zhang, 2016). There are mainly three kinds of models developed for deep learning: the stacked autoencoder (SAE) (Vincent, Larochelle, Bengio, & Manzagol, 2008), the deep belief network (DBN) (Hinton & Salakhutdinov, 2006), and the convolutional neural network (CNN) (LeCun, Bottou, Bengio, & Haffner, 1998).…”
Section: Introduction
confidence: 99%
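The stacked sparse autoencoder named in the title and in the excerpt above can be illustrated with a short, self-contained numpy sketch: each layer is a sparse autoencoder trained to reconstruct its input under a KL-divergence sparsity penalty, and layers are stacked greedily, each one trained on the codes of the previous. All class names, hyperparameters, and the training loop below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SparseAutoencoder:
    """One sparse autoencoder layer: minimise squared reconstruction error
    plus beta * KL(rho || rho_hat), pushing mean hidden activation to rho."""

    def __init__(self, n_in, n_hidden, rho=0.05, beta=0.1, lr=0.5):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_in))
        self.b2 = np.zeros(n_in)
        self.rho, self.beta, self.lr = rho, beta, lr

    def encode(self, X):
        return sigmoid(X @ self.W1 + self.b1)

    def train(self, X, epochs=200):
        m = X.shape[0]
        for _ in range(epochs):
            H = self.encode(X)                      # hidden activations
            Xr = sigmoid(H @ self.W2 + self.b2)     # reconstruction
            rho_hat = H.mean(axis=0)                # mean activation per unit
            # Backprop through the reconstruction loss and sparsity penalty.
            d_out = (Xr - X) * Xr * (1 - Xr)
            kl_grad = self.beta * (-self.rho / rho_hat
                                   + (1 - self.rho) / (1 - rho_hat))
            d_hid = (d_out @ self.W2.T + kl_grad) * H * (1 - H)
            self.W2 -= self.lr * H.T @ d_out / m
            self.b2 -= self.lr * d_out.mean(axis=0)
            self.W1 -= self.lr * X.T @ d_hid / m
            self.b1 -= self.lr * d_hid.mean(axis=0)
        return self

def stack(X, hidden_sizes):
    """Greedy layer-wise stacking: each layer trains on the previous codes."""
    codes, layers = X, []
    for n_hidden in hidden_sizes:
        ae = SparseAutoencoder(codes.shape[1], n_hidden).train(codes)
        codes = ae.encode(codes)
        layers.append(ae)
    return layers, codes
```

In the PolSAR setting described by the paper, the final codes would feed a supervised classifier; the greedy layer-wise pretraining shown here is the standard SAE recipe (Vincent et al., 2008) rather than the authors' exact configuration.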