2019
DOI: 10.5194/isprs-archives-xlii-4-w18-279-2019
CNN-Based Feature-Level Fusion of Very High Resolution Aerial Imagery and Lidar Data

Abstract: Land-cover classification of Remote Sensing (RS) data in urban areas has always been a challenging task due to the complicated relations between different objects. Recently, the fusion of aerial imagery and light detection and ranging (LiDAR) data has attracted great attention in the RS community. Meanwhile, the convolutional neural network (CNN) has proven its power in extracting high-level (deep) descriptors that improve RS data classification. In this paper, a CNN-based feature-level framework is proposed to …
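The abstract describes fusing imagery and LiDAR at the feature level, i.e. combining deep descriptors from each modality before classification rather than fusing raw data or class decisions. A minimal sketch of concatenation-based feature-level fusion (all names, shapes, and dimensions below are illustrative assumptions, not the paper's actual architecture):

```python
import numpy as np

# Hypothetical deep descriptors produced by two CNN branches:
# one over RGB aerial imagery, one over LiDAR-derived rasters.
rng = np.random.default_rng(0)
img_features = rng.standard_normal((4, 128))   # 4 patches, 128-D image descriptors
lidar_features = rng.standard_normal((4, 64))  # same 4 patches, 64-D LiDAR descriptors

# Feature-level fusion: concatenate the per-patch descriptors into a
# single vector that is then fed to the classifier.
fused = np.concatenate([img_features, lidar_features], axis=1)
print(fused.shape)  # (4, 192)
```

Note how the fused dimensionality is the sum of the branch dimensionalities; this growth is exactly the computational-load concern raised in the citation statements below.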

Cited by 5 publications (3 citation statements)
References 19 publications
“…Machine learning-based (CNN) methods of multimodality fusion are an effective medical image analysis approach (Mathotaarachchi et al 2017; Huang et al 2019; Jiang et al 2021; Liu et al 2018) for multi-class classification (Goenka and Tiwari 2022b, c). The authors in Daneshtalab et al (2019) produced an accuracy of 94.2%, a better performance than that of Qiu et al (2018) with an accuracy of 84.0%. Both studies fused information extracted from sMRI and DTI images, but the study with machine learning-based methods (Kang et al 2020) performed better.…”
Section: Discussion
confidence: 81%
“…A preferred abstraction of most researchers was feature-level fusion due to its capability of providing more valid results in the case of compatible features (Daneshtalab et al 2019; Agarwal and Desai 2021). However, the concatenation of compatible features may produce an extremely large feature vector that makes the computational load more difficult (Nachappa et al Apr.…”
Section: Discussion
confidence: 99%
“…Recent work has also explored the fusion of airborne LiDAR with overhead imagery for the task of semantic segmentation in an urban area [2]. Typically these approaches render the LiDAR data as 2D images through digital surface models and use a traditional CNN.…”
Section: Multi-modal Data Fusion
confidence: 99%
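The last citation statement notes that such approaches render LiDAR point clouds as 2D images via digital surface models (DSMs) so that a standard image CNN can consume them. A minimal sketch of that rasterization step, keeping the maximum elevation per grid cell (the function name, cell size, and grid shape are illustrative assumptions, not the cited method):

```python
import numpy as np

def lidar_to_dsm(points, cell_size, grid_shape):
    """Rasterize (x, y, z) points into a digital surface model by
    keeping the maximum elevation observed in each grid cell."""
    dsm = np.full(grid_shape, np.nan)
    cols = (points[:, 0] / cell_size).astype(int)
    rows = (points[:, 1] / cell_size).astype(int)
    for r, c, z in zip(rows, cols, points[:, 2]):
        if 0 <= r < grid_shape[0] and 0 <= c < grid_shape[1]:
            # Keep the highest return per cell (first-surface DSM).
            if np.isnan(dsm[r, c]) or z > dsm[r, c]:
                dsm[r, c] = z
    return dsm

# Three toy returns; the first two fall in the same 1 m cell.
pts = np.array([[0.2, 0.3, 5.0],
                [0.4, 0.1, 7.5],
                [1.2, 1.8, 3.0]])
dsm = lidar_to_dsm(pts, cell_size=1.0, grid_shape=(2, 2))
print(dsm)  # cell (0, 0) holds the higher return, 7.5
```

The resulting 2D elevation raster can then be stacked with, or processed in parallel to, the aerial imagery by a conventional CNN.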