2017
DOI: 10.1109/jstars.2016.2634863
Hyperspectral and LiDAR Data Fusion Using Extinction Profiles and Deep Convolutional Neural Network

Abstract: This paper proposes a novel framework for the fusion of hyperspectral and LiDAR-derived rasterized data using extinction profiles (EPs) and deep learning. In order to extract spatial and elevation information from both sources, EPs that include different attributes (e.g., height, area, volume, diagonal of the bounding box, and standard deviation) are taken into account. Then, the derived features are fused via either feature stacking or graph-based feature fusion. Finally, the fused features are fed to a d…
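As a rough illustration of the feature-stacking fusion step described in the abstract, the per-pixel EP features from the two sources can simply be concatenated along the feature axis. This is a minimal sketch; the image size and feature counts below are invented for the example and are not those of the paper.

```python
import numpy as np

# Hypothetical shapes: EP feature cubes derived from the hyperspectral
# image and from the rasterized LiDAR data over the same H x W scene.
H, W = 64, 64
ep_hsi = np.random.rand(H, W, 30)    # EP features from hyperspectral bands
ep_lidar = np.random.rand(H, W, 15)  # EP features from the LiDAR raster

# Feature stacking: concatenate per-pixel feature vectors along the
# feature axis before feeding them to a classifier or CNN.
fused = np.concatenate([ep_hsi, ep_lidar], axis=-1)
print(fused.shape)  # (64, 64, 45)
```

The graph-based fusion alternative mentioned in the abstract would instead project both feature sets into a common subspace; stacking is the simpler of the two options.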


Cited by 191 publications (138 citation statements)
References 53 publications
“…Finally, the averaged spectral vector, which actually includes the spatial contextual information, was processed by the following deep network. Furthermore, instead of directly exploiting the spatial information within a neighboring window, different filtering methods (e.g., Gabor filtering [78], [79], attribute filtering [80], extinction filtering [81], and rolling guidance filtering [82]) were introduced to process the original hyperspectral data, aiming to extract more effective spatial features. These filter-based works combine deep learning techniques with other spatial-feature extraction methods and deliver more accurate classification results.…”
Section: Spectral-Spatial-Feature Network
confidence: 99%
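A minimal sketch of the kind of Gabor filtering this excerpt refers to: a band of the hyperspectral image is convolved with an oriented Gabor kernel to produce one spatial feature map. All parameter values here are illustrative, not those used in the cited works, and the convolution is done circularly via the FFT purely for brevity.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=8.0, gamma=0.5):
    """Real part of a 2-D Gabor filter (illustrative parameter values)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam)
    return envelope * carrier

band = np.random.rand(64, 64)  # one hyperspectral band (placeholder data)
k = gabor_kernel()

# Circular convolution via the FFT (a real pipeline would pad properly).
feat = np.real(np.fft.ifft2(np.fft.fft2(band) * np.fft.fft2(k, s=band.shape)))
```

In practice a bank of such kernels over several orientations and wavelengths would be applied per band, and the resulting feature maps fed to the deep network.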
“…SVM classifiers with radial basis functions using extended multi-attribute profiles [13] and extended multi-extinction profiles [14], called EMAP-SVMs and EMEP-SVMs for short, were used for comparison. We used the same parameters as [12] to ensure the effectiveness of EMAPs.…”
Section: A. Data Description and Experimental Settings
confidence: 99%
“…Besides, we also conducted experiments on the standard training and test samples of the Houston data set. Detailed information about the number of standard training and test samples can be found in [14]. In this case, the basic structure of the model remained the same, except that a smaller neighborhood window (11×11) was adopted for both the spectral and LiDAR networks.…”
Section: Classification Performances After Data Fusion
confidence: 99%
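The 11×11 neighborhood window mentioned in this excerpt can be extracted per pixel as sketched below. Zero-padding at the image borders is an assumption made for this example; the cited paper does not specify its border handling, and the cube dimensions are placeholders.

```python
import numpy as np

def extract_patch(cube, row, col, win=11):
    """Extract a win x win neighborhood centered at (row, col).

    Borders are zero-padded so every pixel yields a full-size patch.
    """
    half = win // 2
    padded = np.pad(cube, ((half, half), (half, half), (0, 0)),
                    mode='constant')
    # After padding, the original pixel (row, col) sits at (row+half, col+half).
    return padded[row:row + win, col:col + win, :]

cube = np.random.rand(64, 64, 144)  # placeholder hyperspectral data cube
patch = extract_patch(cube, 0, 0)   # corner pixel: half the patch is padding
print(patch.shape)  # (11, 11, 144)
```

Each such patch, paired with the class label of its center pixel, forms one training sample for the spectral or LiDAR network.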
“…For example, Guan et al [34] used a segmentation technique to isolate tree crowns, and then used a neural network to classify species based on point distribution. In another study, Ghamisi et al [35] applied a 2D CNN to estimate forest attributes from rasterized LiDAR and hyperspectral data. A 2D CNN is designed to scan two-dimensional images, and is only capable of identifying spatial features along two axes.…”
Section: Introduction
confidence: 99%
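The two-axis scanning behavior of a 2-D convolution that this excerpt describes can be made concrete with a naive reference implementation (not an efficient CNN layer): the kernel slides only along the two spatial axes, which is why such a layer sees no third dimension.

```python
import numpy as np

def conv2d(img, kernel):
    """Naive 2-D cross-correlation over a single-channel image.

    The kernel is swept along the height and width axes only; depth
    (e.g., spectral bands) would need a 3-D kernel to be scanned too.
    """
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.random.rand(11, 11)           # placeholder single-band patch
out = conv2d(img, np.ones((3, 3)) / 9)  # 3x3 mean filter -> 9x9 feature map
```

A 3-D CNN generalizes this loop with a third index over depth, which is the distinction the citing authors draw for point-cloud and volumetric data.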