2023
DOI: 10.1109/tgrs.2023.3300043
X-Shaped Interactive Autoencoders With Cross-Modality Mutual Learning for Unsupervised Hyperspectral Image Super-Resolution

Jiaxin Li,
Ke Zheng,
Zhi Li
et al.
Cited by 43 publications (6 citation statements)
References 67 publications
“…Other notable contributions include the deep neural network of Jiang et al. [12], built on a spatial-spectral prior network (SSPSR) to fully exploit the spatial and spectral correlation information within hyperspectral images. XINet [13], an X-shaped interactive autoencoder network, addresses limitations in hyperspectral image super-resolution (HSI-SR): by coupling two U-Nets and introducing cross-modality mutual learning, it effectively exploits multimodal information to enhance spatial-spectral features.…”
Section: Related Work
confidence: 99%
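The X-shaped coupling described above — two autoencoder branches, one per modality, exchanging latent representations so that each decoder can reconstruct from the other branch's code — can be illustrated with a minimal structural sketch. This is not XINet itself: the layer sizes, band counts, and the random matrices standing in for learned weights are all hypothetical, and the real network uses U-Net branches with trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # random matrix standing in for a learned layer (illustration only)
    return rng.standard_normal((in_dim, out_dim)) * 0.1

class Branch:
    """One arm of the X: an encoder and decoder around a shared latent size."""
    def __init__(self, in_dim, latent_dim):
        self.enc = linear(in_dim, latent_dim)
        self.dec = linear(latent_dim, in_dim)

    def encode(self, x):
        return np.tanh(x @ self.enc)

    def decode(self, z):
        return z @ self.dec

latent = 16
hsi_bands, msi_bands = 64, 4           # hypothetical band counts
hsi_branch = Branch(hsi_bands, latent)
msi_branch = Branch(msi_bands, latent)

# toy per-pixel spectra (spatial dimensions flattened to rows)
lr_hsi = rng.standard_normal((100, hsi_bands))   # low-res hyperspectral
hr_msi = rng.standard_normal((400, msi_bands))   # high-res multispectral

z_hsi = hsi_branch.encode(lr_hsi)
z_msi = msi_branch.encode(hr_msi)

# cross-modality step: each decoder also reconstructs from the *other*
# branch's latent code, coupling spectral and spatial information
fused_hsi = hsi_branch.decode(z_msi)   # HR spatial grid, full band count
recon_msi = msi_branch.decode(z_hsi)

print(fused_hsi.shape)  # (400, 64)
```

Because both branches share one latent dimensionality, the latent codes are interchangeable between decoders, which is the structural point of the X shape: spatial detail from the multispectral branch reaches the hyperspectral decoder, and vice versa.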
“…In recent years, research has demonstrated the remarkable power of deep neural networks in modeling complex datasets and mining high-dimensional information, enabling them to extract more representative features than conventional methods while exhibiting exceptional feature expression capabilities [32][33][34][35]. The use of deep learning techniques has progressively gained prominence for HAD [36].…”
Section: Introduction
confidence: 99%
“…LiDAR sensors offer rich spatial structural information, and mainstream 3D object detection algorithms typically adopt point cloud-based methods. Because point cloud data are unordered and unstructured, feature extraction networks cannot be applied directly, as they are for images [24][25][26][27], to obtain multiscale features. Existing approaches address this by voxelizing the raw point cloud [7][8][9]12,28,29] or projecting it to a Bird's Eye View (BEV) [30][31][32], then using 3D convolutional neural networks to extract various spatial features.…”
Section: Introduction
confidence: 99%
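The BEV projection mentioned in the last statement reduces an unordered point set to a regular grid that convolutional networks can consume. A minimal sketch, assuming a simple occupancy-count grid (real pipelines such as voxel-based detectors keep per-voxel features, not just counts, and the voxel size and grid extent here are arbitrary):

```python
import numpy as np

def voxelize_bev(points, voxel_size=0.5, grid=(8, 8)):
    """Project raw (x, y, z) points onto a Bird's Eye View occupancy grid.

    points: (N, 3) array; only x and y determine the BEV cell.
    Returns a (grid[0], grid[1]) array of per-cell point counts.
    """
    ix = np.floor(points[:, 0] / voxel_size).astype(int)
    iy = np.floor(points[:, 1] / voxel_size).astype(int)
    bev = np.zeros(grid, dtype=int)
    # drop points that fall outside the grid extent
    mask = (ix >= 0) & (ix < grid[0]) & (iy >= 0) & (iy < grid[1])
    # np.add.at accumulates correctly even when several points share a cell
    np.add.at(bev, (ix[mask], iy[mask]), 1)
    return bev

pts = np.array([[0.1, 0.2, 1.0],
                [0.3, 0.1, 0.5],
                [3.9, 3.9, 2.0],
                [-1.0, 0.0, 0.0]])   # outside the grid, discarded
bev = voxelize_bev(pts)
print(bev[0, 0], bev[7, 7], bev.sum())  # 2 1 3
```

The resulting dense grid is what allows standard 2D/3D convolutions to run over what was originally an irregular point set.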