2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw56347.2022.00090

MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction

Cited by 135 publications (67 citation statements) · References 56 publications
“…6.1. MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction [10] Figure 5 describes the MST++ pipeline: (a) depicts the proposed Multi-stage Spectral-wise Transformer (MST++), which is a cascade of N_s Single-stage Spectral-wise Transformers (SSTs). MST++ takes an RGB image as input and reconstructs its HSI counterpart.…”
Section: Methods and Teams
Mentioning confidence: 99%
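The cascaded design quoted above can be sketched in a few lines of NumPy. This is a shape-level illustration only, not the authors' implementation: `mock_sst` is a hypothetical placeholder for one Single-stage Spectral-wise Transformer, and the stage count and channel numbers (3-channel RGB in, 31-band HSI out) follow the common NTIRE spectral-reconstruction setting.

```python
import numpy as np

def mock_sst(x, out_channels=31):
    """Placeholder for one Single-stage Spectral-wise Transformer (SST).
    A real SST contains spectral-wise attention blocks; here a fixed
    random projection stands in, purely to show the tensor shapes."""
    h, w, c = x.shape
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((c, out_channels)) * 0.01
    return x @ proj  # (H, W, C) -> (H, W, out_channels)

def mst_plus_plus(rgb, n_stages=3):
    """Cascade of N_s SSTs: RGB (H, W, 3) -> HSI (H, W, 31)."""
    x = mock_sst(rgb, out_channels=31)        # lift RGB into spectral space
    for _ in range(n_stages - 1):
        x = x + mock_sst(x, out_channels=31)  # each stage refines the estimate
    return x

rgb = np.zeros((8, 8, 3))
hsi = mst_plus_plus(rgb)
assert hsi.shape == (8, 8, 31)
```

The point of the multi-stage layout is progressive refinement: each SST receives the previous stage's coarse HSI estimate and improves it, rather than predicting all 31 bands in one shot.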
“…Early attempts at this task relied on sparse-coding/regression-based methods [1,3,36,39,43]. In recent years, neural-network-based methodologies have become significantly more prominent [10,23,32], though not entirely displacing approaches such as sparse coding [33]. The goal of this challenge is to gauge the state of the art in spectral recovery from natural RGB images and to provide a larger-than-ever natural hyperspectral image dataset to facilitate future development.…”
Section: Introduction
Mentioning confidence: 99%
“…We provide quantitative comparisons between our RFormer and seven SOTA methods, including two model-based methods (GLCAE [87] and Bicubic+RL [86]), four CNN-based methods (RealSR [88], ESRGAN [20], I-SECRET [1], and Cofe-Net [2]), and one Transformer-based method (MST [82]). The quantitative comparisons on our RF are shown in Table I; the proposed RFormer outperforms the other competitors in terms of PSNR and SSIM.…”
Section: B. Comparisons With State-of-the-art Methods
Mentioning confidence: 99%
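PSNR, one of the two metrics in the comparison above, is simple to compute directly. A minimal sketch (standard definition, not tied to any cited paper's evaluation code; the two uniform test images are made up for illustration):

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.full((4, 4), 0.5)
b = np.full((4, 4), 0.6)   # uniform error of 0.1 -> MSE = 0.01 -> 20 dB
print(round(psnr(a, b), 2))  # 20.0
```

Higher PSNR means lower mean squared error against the reference; SSIM, the other metric, additionally models local structure and is usually computed with a library such as scikit-image.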
“…Chen et al. [77] propose IPT, a large model pre-trained on large-scale datasets with a multi-task learning scheme. MST [82] presents a spectral-wise Transformer for HSI reconstruction. Although Transformers have achieved impressive results in many tasks, their potential in fundus image restoration remains under-explored.…”
Section: Vision Transformer
Mentioning confidence: 99%
“…It has good reconstruction results on both synthetic and real datasets. Yuanhao Cai et al. (2022) proposed the Multi-stage Spectral-wise Transformer (MST++), an efficient Transformer-based spectral reconstruction method. Spectral-wise Multi-head Self-Attention (S-MSA), which exploits spatial sparsity and spectral self-similarity, is used to form the Spectral-wise Attention Block (SAB).…”
Section: Introduction
Mentioning confidence: 99%
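The key idea behind S-MSA, as described above, is to compute self-attention along the spectral (channel) dimension rather than the spatial one, so the attention map is C×C instead of (HW)×(HW). A single-head NumPy sketch of that idea (identity Q/K/V projections for brevity; the real S-MSA uses learned projections and multiple heads):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spectral_wise_attention(feat):
    """Single-head sketch of spectral-wise self-attention:
    each spectral channel is treated as one token of length H*W,
    so the attention matrix is C x C rather than (HW) x (HW)."""
    h, w, c = feat.shape
    tokens = feat.reshape(h * w, c).T               # (C, HW): one token per channel
    q, k, v = tokens, tokens, tokens                # identity projections for brevity
    attn = softmax(q @ k.T / np.sqrt(q.shape[1]))   # (C, C) spectral attention map
    out = attn @ v                                  # (C, HW)
    return out.T.reshape(h, w, c)

x = np.random.default_rng(1).standard_normal((8, 8, 31))
y = spectral_wise_attention(x)
assert y.shape == x.shape
```

For a 256×256 image with 31 bands, this makes the attention map 31×31 instead of 65536×65536, which is why treating channels as tokens is attractive for high-resolution spectral reconstruction.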