2014 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2014.7025260
Split Bregman algorithms for sparse / joint-sparse and low-rank signal recovery: Application in compressive hyperspectral imaging

Cited by 36 publications (26 citation statements)
References 23 publications
“…This approach has been very successful in solving multiple-penalty optimization problems [10,11]. We repeat the synthesis prior problem for the sake of convenience.…”
Section: Proposed Algorithms (mentioning)
confidence: 99%
“…The 'Botswana' data was acquired by the NASA EO-1 satellite over the Okavango Delta, Botswana, in 2001. We choose the Indian Pines and Botswana data because they are widely used in research on HSI compressive sensing, sparse representation, and classification [17][18][19].…”
Section: A. Simulation Setup (mentioning)
confidence: 99%
“…However, Gaussian noise removal from such images is a well-studied topic. In fact, there are several studies that address the problem of compressive hyperspectral imaging in the presence of Gaussian noise [9,10]. These studies recover the image by solving the following optimization problem:…”
Section: Literature Review (mentioning)
confidence: 99%
“…To tune the second parameter λ2, we fix λ1 to the obtained value and again use the L-curve method to determine λ2. Such a technique, although sub-optimal, has shown good results in practice before [10,14,15]. We did not fine-tune the parameters and varied them on the log scale (100, 10, 1, 0.1, etc.).…”
Section: Experimental Evaluation (mentioning)
confidence: 99%
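The tuning procedure quoted above, fixing the first regularization parameter and then sweeping the second over a log-scale grid, can be sketched as follows. This is a minimal illustration, not the authors' code: `recovery_error` is a hypothetical stand-in for actually running the recovery solver with a given (λ1, λ2) pair and scoring the reconstruction (e.g., by the L-curve corner criterion).

```python
import numpy as np

def recovery_error(lam1, lam2):
    # Hypothetical stand-in: in practice, run the solver with (lam1, lam2)
    # and score the result (e.g., residual-norm vs. solution-norm trade-off).
    return (np.log10(lam1)) ** 2 + (np.log10(lam2) + 1.0) ** 2

# Sweep each parameter on a log scale (100, 10, 1, 0.1, ...), as described.
grid = [100.0, 10.0, 1.0, 0.1, 0.01]

# Step 1: tune lam1 with lam2 held at a default value.
lam1_best = min(grid, key=lambda l1: recovery_error(l1, 1.0))

# Step 2: fix lam1 at the obtained value and tune lam2 the same way.
lam2_best = min(grid, key=lambda l2: recovery_error(lam1_best, l2))

print(lam1_best, lam2_best)
```

This coordinate-wise search evaluates 2N solver runs instead of N² for an N-point grid, which is the "sub-optimal but effective" trade-off the quoted passage describes.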