2009
DOI: 10.1117/12.818245
L1 unmixing and its application to hyperspectral image enhancement

Cited by 112 publications (98 citation statements)
References 16 publications
“…Total variation (TV) has been widely used to explore the spatial piecewise smooth structure for tackling various HSI processing tasks [15,28,29]. It has the ability of preserving local spatial consistency and suppressing observed noise.…”
Section: TV Regularization
confidence: 99%
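The total variation penalty referenced in the excerpt above can be illustrated with a minimal sketch. This shows the anisotropic discrete TV (sum of absolute finite differences), a generic formulation; the function name and test image are illustrative and not taken from the cited works.

```python
import numpy as np

def anisotropic_tv(img):
    """Anisotropic total variation: sum of absolute horizontal and
    vertical finite differences. Low for piecewise-smooth images,
    high for noisy ones, which is why minimizing it suppresses noise
    while preserving local spatial consistency."""
    dh = np.abs(np.diff(img, axis=1)).sum()  # horizontal differences
    dv = np.abs(np.diff(img, axis=0)).sum()  # vertical differences
    return dh + dv

# A piecewise-constant image has much lower TV than its noisy version.
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0                          # single vertical edge
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
assert anisotropic_tv(clean) < anisotropic_tv(noisy)
```

The clean image pays only for its one edge (TV = 32 here), while noise adds differences at every pixel, so a TV-regularized objective prefers the piecewise-smooth solution.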
“…In [2], Akgun et al proposed a novel hyperspectral image acquisition model, with which a projection-onto-convex-sets-based super-resolution method was proposed to enhance the resolution of HSIs. Guo et al [15] used unmixing information and total variation (TV) minimization to produce a higher-resolution HSI. By modeling the sparse prior underlying HSIs, a sparse HSI super-resolution model was proposed in [16].…”
Section: Introduction
confidence: 99%
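Since the indexed paper concerns ℓ1 unmixing, a minimal sketch of ℓ1-regularized (sparse) unmixing of a single pixel spectrum may help. This uses generic ISTA (iterative soft-thresholding) against a small endmember library; it is an illustration of the general technique, not the cited authors' algorithm, and all names, sizes, and parameters are hypothetical.

```python
import numpy as np

def ista_unmix(E, y, lam=1e-3, n_iter=5000):
    """Sparse unmixing of one pixel spectrum y against an endmember
    library E via ISTA: minimize 0.5*||E a - y||^2 + lam*||a||_1."""
    L = np.linalg.norm(E, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(E.shape[1])
    for _ in range(n_iter):
        g = E.T @ (E @ a - y)                # gradient of the data-fit term
        a = a - g / L                        # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft-threshold
    return a

# Toy example: a spectrum mixed from 2 of 5 library endmembers.
rng = np.random.default_rng(1)
E = rng.random((50, 5))                      # 50 bands, 5 endmembers
a_true = np.array([0.7, 0.0, 0.3, 0.0, 0.0])
y = E @ a_true
a_hat = ista_unmix(E, y)                     # recovers a sparse abundance vector
```

The soft-thresholding step is what drives most abundances exactly to zero, matching the physical expectation that only a few materials contribute to each pixel.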
“…Here each pixel is divided into a specified number of subpixels, which are rearranged according to the endmembers and their fractional abundances using, e.g., spatial regularization techniques (Villa et al, 2010), total variation minimization (Guo et al, 2009), or Hopfield neural network optimization (Nguyen et al, 2006). The main advantage of these methods is that they do not require any information other than that contained in the hyperspectral image itself.…”
Section: Introduction
confidence: 99%
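The subpixel-mapping idea described in the excerpt above (dividing each pixel into subpixels assigned to endmember classes in proportion to the abundances) can be sketched as follows. This covers only the class-count allocation step via largest-remainder rounding; the spatial rearrangement within the pixel would then be optimized by, e.g., spatial regularization or a Hopfield network. The function is an illustrative simplification, not any of the cited methods.

```python
import numpy as np

def allocate_subpixels(abundances, scale=4):
    """Decide how many of the scale*scale subpixels of one coarse pixel
    belong to each endmember class, so that class counts match the
    fractional abundances as closely as integer counts allow
    (largest-remainder rounding)."""
    n_sub = scale * scale
    raw = np.asarray(abundances) * n_sub     # ideal fractional counts
    counts = np.floor(raw).astype(int)
    # Hand the leftover subpixels to the classes with largest remainders.
    for i in np.argsort(raw - counts)[::-1][: n_sub - counts.sum()]:
        counts[i] += 1
    return counts

# 60% class A, 25% class B, 15% class C in a 4x4 subpixel grid:
print(allocate_subpixels([0.60, 0.25, 0.15]))  # counts per class, summing to 16
```

Note that only the image's own abundance estimates are consumed here, which is the self-containedness advantage the excerpt highlights.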
“…As a result, these methods suffer from numerical problems for smaller values of p. Thus, an attractive solution is to employ Lipschitz continuous approximations, such as the exponential function, the logarithm function or sigmoid functions, e.g., [20,23,34]. The arctan function is also used in the literature for sparse regularization, such as approximating the sign function appearing in the derivative of the ℓ1-norm term in [35], introducing a penalty function for sparse signal estimation by the maximally-sparse convex approach in [36], or approximating the ℓ0-norm term through a weighted ℓ1-norm term in [23].…”
Section: Introduction
confidence: 99%
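The arctan surrogate mentioned in the last excerpt can be sketched in a few lines. This is the generic smooth approximation of the ℓ0 "norm"; the scaling and the smoothing parameter `sigma` are illustrative choices, not values from the cited works.

```python
import numpy as np

def arctan_penalty(x, sigma=0.05):
    """Smooth arctan surrogate for the l0 'norm': (2/pi)*arctan(|x|/sigma)
    is 0 at x = 0 and approaches 1 for |x| >> sigma, yet has a bounded,
    Lipschitz-continuous slope -- avoiding the numerical problems that
    lp penalties exhibit for small p near the origin."""
    return (2.0 / np.pi) * np.arctan(np.abs(x) / sigma)

x = np.array([0.0, 0.001, 0.5, -2.0])
print(np.round(arctan_penalty(x), 3))  # near 0 for tiny x, near 1 for large |x|
```

Shrinking `sigma` sharpens the surrogate toward the true ℓ0 count, at the cost of a steeper (but still finite) gradient near zero, which is exactly the trade-off these smoothing approaches tune.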