2014 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2014.7026039

Classification of hyperspectral image based on deep belief networks

Abstract: Dimensionality-reduction methods such as Principal Component Analysis (PCA) and Nonnegative Matrix Factorization (NMF) are commonly applied as a preprocessing step in hyperspectral image classification so that the constituent elements of every pixel in the scene can be classified efficiently. Such reduction, however, inevitably loses detailed spectral information. In this paper, deep learning frameworks, the restricted Boltzmann machine (RBM) and its deep counterpart, the deep belief network (DBN), are in…
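The abstract contrasts dimensionality-reduction preprocessing (PCA, NMF) with applying RBM/DBN-style models to the full spectral input. Purely as a rough illustration (not the paper's implementation), the sketch below sets both routes up as scikit-learn pipelines on per-pixel spectra; the arrays X and y, the component counts, and all other hyperparameters are placeholder assumptions.

```python
# Illustrative sketch only: contrasts PCA preprocessing with RBM feature
# learning for per-pixel hyperspectral classification. X is a placeholder
# (n_pixels, n_bands) array of spectra and y the per-pixel labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 103))          # placeholder spectra (e.g., 103 bands)
y = rng.integers(0, 9, size=1000)    # placeholder class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Route 1: classical dimensionality reduction (PCA) before classification.
pca_clf = Pipeline([
    ("pca", PCA(n_components=20)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Route 2: RBM feature learning on the (rescaled) full spectrum,
# i.e., no explicit band reduction beforehand.
rbm_clf = Pipeline([
    ("scale", MinMaxScaler()),                 # RBM expects inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=50, learning_rate=0.05, n_iter=20)),
    ("clf", LogisticRegression(max_iter=1000)),
])

for name, model in [("PCA + LR", pca_clf), ("RBM + LR", rbm_clf)]:
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", model.score(X_te, y_te))
```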

Cited by 174 publications (79 citation statements) · References 10 publications
“…The first attempt can be found in [20], where a stacked autoencoder was utilized for high-level feature extraction. Subsequently, Li et al. [45] used a restricted Boltzmann machine and a deep belief network for hyperspectral image feature extraction and pixel classification, by which the information contained in the original data is well retained. Meanwhile, the RNN model has been applied to hyperspectral image classification [46].…”
Section: A. Hyperspectral Image Classification (mentioning, confidence: 99%)
“…
… | Year | Datasets* | Settings
Multiscale superpixels [1] | 2018 | IP, PU | Random
Watershed + SVM [2] | 2010 | PU | Arbitrary
Clustering (SVM) [3] | 2011 | Washington DC, PU | Full image
Multiresolution segm. [4] | 2018 | Three in-house datasets | Random
Region expansion [5] | 2018 | PU, Sa, KSC | Full image
DBN (spatial-spectral) [6] | 2014 | HU | Random
DBN (spatial-spectral) [7] | 2015 | IP, PU | Random
Deep autoencoder [8] | 2014 | KSC, PU | Monte Carlo
CNN [9] | 2016 | PC, PU | Random
CNN [10] | 2016 | IP, PU, KSC | Monte Carlo
Active learning + DBN [11] | 2017 | PC, PU, Bo | Random
DBN (spectral) [12] | 2017 | IP, PU | Random
RNN (spectral) [13] | 2017 | PU, HU, IP | Random
CNN [14] | 2017 | IP, Sa, PU | Random
CNN [15] | 2017 | IP, Sa, PU | Monte Carlo
CNN [16] | 2018 | IP, Sa, PU | Monte Carlo
CNN [17] | 2018 | Sa, PU | Random
* IP-Indian Pines; PU-Pavia University; Sa-Salinas; KSC-Kennedy Space Center; HU-Houston University; PC-Pavia Centre; Bo-Botswana

…from an input HSI are selected as training and test pixels at random (without overlaps). Unfortunately, different authors report different divisions (in terms of the percentages of pixels taken as training and test pixels), which makes fair comparison of new and existing techniques troublesome.…”
Section: Methods (mentioning, confidence: 99%)
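The quoted survey notes that random training/test pixel divisions differ between papers, hampering fair comparison. Purely as an illustration, the sketch below shows one reproducible, stratified way to draw such a split; the ground-truth map gt (with 0 meaning unlabeled), the 10% training fraction, and the seed are all assumptions, not the settings of any cited work.

```python
# Illustrative sketch: reproducible, stratified random split of labeled
# hyperspectral pixels into training and test sets. `gt` is a placeholder
# ground-truth map (0 = unlabeled); the ratio and seed are arbitrary.
import numpy as np
from sklearn.model_selection import train_test_split

gt = np.random.default_rng(1).integers(0, 10, size=(145, 145))  # placeholder

rows, cols = np.nonzero(gt)            # keep only labeled pixels
labels = gt[rows, cols]
pixel_ids = np.stack([rows, cols], axis=1)

train_idx, test_idx = train_test_split(
    np.arange(len(labels)),
    train_size=0.1,                    # e.g., 10% of labeled pixels for training
    stratify=labels,                   # preserve class proportions
    random_state=42,                   # fixed seed => reproducible division
)

train_pixels, test_pixels = pixel_ids[train_idx], pixel_ids[test_idx]
print(len(train_pixels), "training pixels,", len(test_pixels), "test pixels")
```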
“…Hence, HSI is being actively used in a range of areas, including precision agriculture, the military, surveillance, and more [1]. State-of-the-art HSI segmentation methods include conventional machine-learning algorithms, which can be further divided into unsupervised [2], [3] and supervised [1], [4], [5] techniques, and modern deep-learning (DL) algorithms [6]-[17] that do not require feature engineering. DL techniques can extract spectral features (e.g., using deep belief networks [DBNs] [11], [12] and recurrent neural networks [RNNs] [13]) or both spectral and spatial per-pixel information, mainly using convolutional neural networks (CNNs) [9], [10], [14]-[17] and some DBNs [6], [7].…”
Section: Introduction (mentioning, confidence: 99%)
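As a concrete and purely hypothetical example of the spectral-only route mentioned in this quote, the sketch below defines a small 1-D convolutional classifier over a single pixel's spectrum in PyTorch; it is not one of the cited architectures, and the band count, class count, and layer sizes are arbitrary assumptions.

```python
# Illustrative sketch of a spectral-only classifier: a small 1-D CNN over a
# single pixel's spectrum. Not one of the cited architectures; band count,
# class count, and layer sizes are arbitrary assumptions.
import torch
import torch.nn as nn

class SpectralCNN(nn.Module):
    def __init__(self, n_bands=103, n_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),        # global pooling over bands
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                   # x: (batch, n_bands)
        x = x.unsqueeze(1)                  # -> (batch, 1, n_bands)
        x = self.features(x).squeeze(-1)    # -> (batch, 32)
        return self.classifier(x)

model = SpectralCNN()
logits = model(torch.randn(8, 103))         # 8 placeholder spectra
print(logits.shape)                         # torch.Size([8, 9])
```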
“…As a nonlinear feature-extraction method, deep learning combines low-level features into more abstract, high-level category attributes that capture the characteristics of the data distribution, and it has powerful feature-representation and modeling capabilities for complex tasks. Learned deep features generally have better invariance and repeatability, which has led to promising applications of deep learning in the field of remote sensing [3][4][5][6][7][8]. However, training deep models with stochastic gradient descent is computationally expensive and difficult to speed up on parallel CPUs, which makes learning a complex and time-consuming problem.…”
Section: Introduction (mentioning, confidence: 99%)
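Since the quoted passage attributes the training cost of deep models to stochastic gradient descent, the following minimal PyTorch sketch shows the kind of minibatch SGD loop it refers to; the toy data, network, and hyperparameters are placeholder assumptions rather than any cited setup.

```python
# Illustrative sketch of minibatch SGD training for a small classifier.
# Data, model, and hyperparameters are placeholders, not the cited setup.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

X = torch.randn(1000, 103)                  # placeholder spectra
y = torch.randint(0, 9, (1000,))            # placeholder labels
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

model = nn.Sequential(nn.Linear(103, 64), nn.ReLU(), nn.Linear(64, 9))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                      # arbitrary epoch count
    for xb, yb in loader:                   # one SGD update per minibatch
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()                     # gradients via backpropagation
        optimizer.step()                    # parameter update
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```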