2017
DOI: 10.1109/tgrs.2017.2702596

Remote Sensing Scene Classification by Unsupervised Representation Learning

Cited by 271 publications (106 citation statements)
References 51 publications
“…The compression methods for multisource image/video data are designed from the perspective of image features, which usually mine similarities between image blocks by matching feature points. Multiscale features for image representation have also been proposed to extend representation from a single payload to multiple payloads [35-38], which is another way to build relations between multiple data sources. However, the computational complexity is high, and the actual correspondence between the selected image block and the coding object is often lacking, which is not conducive to large-area matching.…”
Section: Video Compression of Multisource Image/Video Data
confidence: 99%
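To make the block-matching idea in the statement above concrete, here is a minimal Python sketch of measuring similarity between two image blocks by matching feature points. It is only an illustration of the general technique, not the pipeline of the cited works; the ORB detector, Hamming distance threshold, and block size are all assumptions.

```python
# Illustrative sketch: similarity between two image blocks via feature-point
# matching (ORB keypoints + brute-force Hamming matcher). Not the cited
# papers' method; the distance threshold is an arbitrary assumption.
import cv2
import numpy as np

def block_similarity(block_a: np.ndarray, block_b: np.ndarray,
                     max_distance: int = 40) -> float:
    """Fraction of ORB keypoints in block_a with a close match in block_b."""
    orb = cv2.ORB_create(nfeatures=500)
    kp_a, des_a = orb.detectAndCompute(block_a, None)
    kp_b, des_b = orb.detectAndCompute(block_b, None)
    if des_a is None or des_b is None or len(kp_a) == 0:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    good = [m for m in matches if m.distance <= max_distance]
    return len(good) / len(kp_a)

# Usage: compare two 64x64 grayscale blocks cut from co-registered images.
# img_a = cv2.imread("a.png", cv2.IMREAD_GRAYSCALE)
# img_b = cv2.imread("b.png", cv2.IMREAD_GRAYSCALE)
# score = block_similarity(img_a[:64, :64], img_b[:64, :64])
```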
“…To guarantee comparability between the accuracy of the proposed method and those reported in [2,3,5,7-13], the labeled dataset is divided into training and testing sets using an 80%-20% training-testing ratio, and five-fold cross-validation is conducted. That is, the labeled image patches are randomly divided into five nearly equal, non-overlapping groups, with one group used as the testing set and the remaining four groups used as the training set in each fold.…”
Section: Methods
confidence: 99%
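The 80%/20% five-fold protocol quoted above can be reproduced with a standard cross-validation split. Below is a minimal sketch assuming scikit-learn, placeholder feature and label arrays, and a linear SVM; stratification by class is a reasonable but unstated choice, and none of these names come from the cited work.

```python
# Minimal sketch of 5-fold cross-validation with an 80/20 train/test split
# per fold, as described in the quoted methods section. X, y, and the
# classifier are placeholders, not the cited work's features or model.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))       # placeholder image-patch features
y = rng.integers(0, 21, size=1000)     # placeholder scene-class labels

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
fold_accuracies = []
for train_idx, test_idx in skf.split(X, y):
    clf = LinearSVC().fit(X[train_idx], y[train_idx])  # 4/5 of data (80%)
    pred = clf.predict(X[test_idx])                     # remaining 1/5 (20%)
    fold_accuracies.append(accuracy_score(y[test_idx], pred))

print("mean accuracy over 5 folds:", np.mean(fold_accuracies))
```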
“…Accuracy (%) comparison:
SPCK++ [2]: 77.38
OMP-k [7]: 81.70
Saliency + SC [13]: 82.72
Multilayer learning [10]: 89.10
UFL-SC [8]: 90.26
Partlets [3]: 91.33
Multipath SC [11]: 91.95
Quaternion + Q-OMP [9]: 92.29
LGF [5]: 95.48
Deconvolution + SPM [12]: 95.71
SSF-CNN: 88.91
SSF-AlexNet: 92.43
Table 4 proves that SSF-CNN outperforms methods including SPM-BoVW, OMP-k, and Saliency + SC, and achieves comparable accuracy with multilayer learning. It should be noted that the CNN Remote Sens.…”
Section: Methods
confidence: 99%