2018
DOI: 10.1109/tsp.2018.2876362

Hyperspectral Super-Resolution: A Coupled Tensor Factorization Approach

Abstract: Hyperspectral super-resolution refers to the problem of fusing a hyperspectral image (HSI) and a multispectral image (MSI) to produce a super-resolution image (SRI) that has fine spatial and spectral resolution. State-of-the-art methods approach the problem via low-rank matrix approximations to the matricized HSI and MSI. These methods are effective to some extent, but a number of challenges remain. First, HSIs and MSIs are naturally third-order tensors (data "cubes") and thus matricization is prone to loss of…

Cited by 189 publications (230 citation statements)
References 55 publications
“…In a lot of applications, some prior knowledge on the latent factors is known; e.g., in image processing, the A^{(n)}'s are normally assumed to be nonnegative [6]; in statistical machine learning, the columns of A^{(n)} are sometimes constrained to lie in the probability simplex [36,42], i.e., 1^T A^{(n)} = 1^T, A^{(n)} ≥ 0.…”
Section: MTTKRP
confidence: 99%
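The simplex constraint in the excerpt above (each factor column nonnegative and summing to one) is typically enforced by Euclidean projection. A minimal NumPy sketch using the standard sorting-based simplex projection; the function name and matrix sizes here are illustrative, not from the paper:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of a vector onto the probability simplex
    {x : x >= 0, sum(x) = 1}, via the classic sort-and-threshold rule."""
    u = np.sort(v)[::-1]                      # sort descending
    css = np.cumsum(u)
    # largest index rho with u[rho] * (rho + 1) > css[rho] - 1
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# Project every column of a factor matrix A^(n), so that
# 1^T A = 1^T and A >= 0 hold column-wise.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
A_proj = np.apply_along_axis(project_to_simplex, 0, A)
```

Running the projection column-wise like this is the usual way the constraint is imposed inside an alternating-optimization loop over the factors.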
“…Therefore Ŷ = [[A_2, B_1, C_1]] reconstructs the signal Y. The proof shares insights with that of Theorem 2 in [19]. The basic difference lies in the fact that the P matrices are full-row-rank selection matrices instead of full-rank dense matrices.…”
Section: Appendix A: Proof of Theorem
confidence: 70%
“…Lemma 1 [19]: Let Z̃ = QZ, where the elements of Z are drawn from an absolutely continuous joint distribution with respect to the Lebesgue measure in C^{IF}, and Q ∈ R^{I′×I} is a row-selection matrix with full row rank. Then the joint distribution of the elements in Z̃ is absolutely continuous with respect to the Lebesgue measure in C^{I′F}. It follows that P_1^{(1)}A and P_3^{(2)}C are drawn from non-singular absolutely continuous distributions.…”
Section: Appendix A: Proof of Theorem
confidence: 99%
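The proof excerpts above rest on the identity that degrading one mode of a CPD tensor with a row-selection matrix is the same as degrading the corresponding factor. A toy NumPy illustration (the sizes, rank, and selection patterns are made up for demonstration):

```python
import numpy as np

def cpd_reconstruct(A, B, C):
    """Rebuild a third-order tensor from its CP factors:
    Y[i, j, k] = sum_f A[i, f] * B[j, f] * C[k, f]."""
    return np.einsum('if,jf,kf->ijk', A, B, C)

rng = np.random.default_rng(0)
I, J, K, F = 8, 8, 6, 3
A = rng.standard_normal((I, F))
B = rng.standard_normal((J, F))
C = rng.standard_normal((K, F))
Y = cpd_reconstruct(A, B, C)       # toy "SRI" tensor

# Row-selection matrices modeling spatial / spectral degradation:
P1 = np.eye(I)[::2]                # keep every other spatial row (4 x 8)
P3 = np.eye(K)[:4]                 # keep the first 4 spectral bands (4 x 6)

# Degrading the tensor equals degrading the matching factor:
Y_hsi = np.einsum('pi,ijk->pjk', P1, Y)   # spatially degraded tensor
Y_msi = np.einsum('pk,ijk->ijp', P3, Y)   # spectrally degraded tensor
```

This factor-level identity is what lets the coupled model tie the HSI and MSI to a single set of latent factors.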
“…The tensor model is also a low-dimensional model, and it exploits not only the spectral-spatial structure but also the two-dimensional spatial structure. Tensor factorization has been shown to admit favorable sufficient conditions for exact-recovery guarantees [13,14]. Under the low-dimensional model, HSR can be seen as a problem of recovering a low-rank matrix from incomplete observations.…”
Section: Dictionary-Based Regression
confidence: 99%
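The last sentence of the excerpt casts HSR as low-rank matrix recovery from incomplete observations. As a generic illustration of that viewpoint (not the method of the cited paper), one can alternate a rank-r SVD truncation with re-imposing the observed entries; a minimal sketch with made-up sizes:

```python
import numpy as np

def complete_low_rank(M_obs, mask, rank, n_iter=500):
    """Fill in the unobserved entries of a low-rank matrix by alternating
    projection onto the rank-r set (truncated SVD) with enforcing the
    observed entries given by `mask`."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # nearest rank-r matrix
        X = np.where(mask, M_obs, X)               # keep observed entries
    return X

rng = np.random.default_rng(1)
M = rng.standard_normal((12, 2)) @ rng.standard_normal((2, 10))  # rank-2 truth
mask = rng.random(M.shape) < 0.8                                 # ~80% observed
X_hat = complete_low_rank(np.where(mask, M, 0.0), mask, rank=2)
```

With enough observed entries relative to the degrees of freedom of a rank-r matrix, this simple alternating scheme recovers the missing values accurately on random instances.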
“…We identify a majorant for problem (9) by resorting to the following result. By applying Fact 1 to (13), and noting that ψ_{p,τ}(X, W) ≥ min_{W ∈ S^M_{++}} ψ_{p,τ}(X, W) = φ_{p,τ}(X) for any W ∈ S^M_{++}, we obtain the following majorant…”
Section: Majorization-Minimization
confidence: 99%
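The excerpt above describes a majorization-minimization (MM) step: a weighted surrogate ψ_{p,τ} upper-bounds a smoothed ℓ_p loss φ_{p,τ} and is minimized instead. A classic instance of this pattern is iteratively reweighted least squares (IRLS), where the concave smoothed ℓ_p penalty is majorized by a weighted quadratic. A hedged NumPy sketch (this is the generic IRLS recipe, not the exact surrogate of the cited paper; function and variable names are illustrative):

```python
import numpy as np

def irls_lp(A, b, p=1.0, tau=1e-8, n_iter=50):
    """Minimize sum_i (r_i^2 + tau)^(p/2) with r = A x - b by MM:
    at each iterate the smoothed l_p loss is majorized by a weighted
    least-squares objective with weights w_i = (r_i^2 + tau)^(p/2 - 1)
    evaluated at the previous iterate."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # plain LS initializer
    for _ in range(n_iter):
        r = A @ x - b
        w = (r**2 + tau) ** (p / 2 - 1)           # majorizer weights
        Aw = A * w[:, None]                       # W A
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)   # weighted LS step
    return x

# Robust line fit: p = 1 down-weights the gross outliers.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(100)
b[:5] += 10.0                                     # inject outliers
x_hat = irls_lp(A, b, p=1.0)
```

Because each weighted LS subproblem minimizes a majorant that touches the true loss at the current iterate, the MM objective is non-increasing across iterations, which is the same monotonicity property the excerpt's construction provides.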