2017 IEEE 18th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)
DOI: 10.1109/spawc.2017.8227671
Rényi divergence based covariance matching pursuit of joint sparse support

Cited by 7 publications (9 citation statements)
References 19 publications
“…We have provided a unifying theoretical platform for deriving different sparse Bayesian learning algorithms for electromagnetic brain imaging using the Majorization-Minimization (MM) framework. Promising metrics in that respect are information divergences such as Kullback-Leibler (KL) [86], Rényi [87], Itakura-Saito (IS) [88], [89] and β divergences [90]–[94] as well as transportation metrics such as the Wasserstein distance between empirical and statistical covariances (e.g., [95]–[98]).…”
Section: Discussion
confidence: 99%
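The quoted passage lists the Rényi divergence (the metric in this paper's title) among candidate divergences between empirical and statistical covariances. As a minimal illustration (my own sketch, not code from the paper, and `renyi_gauss` is a hypothetical name), the order-α Rényi divergence between two zero- or equal-mean-adjusted Gaussians has a closed form in their covariance matrices:

```python
import numpy as np

def renyi_gauss(mu0, S0, mu1, S1, alpha):
    """Renyi divergence of order alpha between N(mu0, S0) and N(mu1, S1),
    via the Gaussian closed form; valid when Sa below is positive definite."""
    Sa = (1.0 - alpha) * S0 + alpha * S1           # interpolated covariance
    dmu = mu1 - mu0
    quad = 0.5 * alpha * dmu @ np.linalg.solve(Sa, dmu)
    _, ld_a = np.linalg.slogdet(Sa)                # log-determinants, stably
    _, ld_0 = np.linalg.slogdet(S0)
    _, ld_1 = np.linalg.slogdet(S1)
    return quad - (ld_a - (1.0 - alpha) * ld_0 - alpha * ld_1) / (2.0 * (alpha - 1.0))
```

As α → 1 this expression recovers the Kullback-Leibler divergence between the two Gaussians, which is why the KL and Rényi entries in the quoted list sit on a common footing.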
“…It is conceivable that alternative divergence metrics can be used for solving the M/EEG source reconstruction problem in the future by modeling specific neurophysiologically valid aspects of similarity between data and model output. Promising metrics in that respect are information divergences such as Kullback-Leibler (KL) [92], Rényi [90], Itakura-Saito (IS) [93] and β divergences [94]–[97] as well as transportation metrics such as the Wasserstein distance between empirical and statistical covariances (e.g., [98]–[101]).…”
Section: Discussion
confidence: 99%
“…While the use of column-normalized sensing matrices is commonplace in compressive sensing, we are often interested in showing the restricted isometry of the Khatri-Rao product of randomly constructed input matrices with columns being normalized only in the average sense.…”
Footnote 3: While the m×n matrices A and B may represent highly underdetermined linear systems (when m ≪ n), their m²×n sized Khatri-Rao product A ⊙ B can become an overdetermined system. In fact, many covariance matching based sparse support recovery algorithms [16], [17], [29] exploit this fact to offer significantly better support reconstruction performance.
Section: Probabilistic K-RIC Bound
confidence: 99%
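The dimension claim in the quoted footnote — that the Khatri-Rao product of two wide m×n matrices is an m²×n matrix that can have rank well above m — is easy to check numerically. A small numpy sketch (my own illustration; the paper itself does not supply code):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of two m x n matrices:
    column j of the result is kron(A[:, j], B[:, j])."""
    m, n = A.shape
    return (A[:, None, :] * B[None, :, :]).reshape(m * m, n)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 10))   # underdetermined: 3 rows, 10 columns
B = rng.standard_normal((3, 10))
K = khatri_rao(A, B)               # 9 x 10: taller, with generically higher rank
rank_A = np.linalg.matrix_rank(A)  # at most m = 3
rank_K = np.linalg.matrix_rank(K)  # generically min(m^2, n) = 9 for random A, B
```

For generic random factors the rank of the product jumps from m to min(m², n), which is exactly the lifting that covariance-matching support recovery algorithms exploit.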
“…Also, several algorithms from the single sample setting have been generalized to work with multiple samples, including convex programming methods [20], [28], [11], thresholding-based methods [13], [14], Bayesian methods [34] and greedy methods [29], [30]. However, none of the above works addresses the question of the tradeoff between m and n when m < k. Initial works considering the m < k regime were [21] and [4], followed by [18] and [23], where it was empirically demonstrated that when multiple samples are available, it is possible to operate in the m < k regime. However, the analysis in [4] is done under two fairly restrictive conditions.…”
Section: Introduction
confidence: 99%
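The m < k regime discussed in this quote (fewer measurements than active sources) is exactly where covariance matching pays off: lifting the problem to the m²-dimensional covariance domain lets more than m source powers be identified. A minimal numpy sketch of that identifiability, under the simplifying assumptions of an ideal (infinite-sample) covariance, no noise, and a known support (my own illustration, not the paper's pursuit algorithm):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of two m x n matrices."""
    m, n = A.shape
    return (A[:, None, :] * B[None, :, :]).reshape(m * m, n)

rng = np.random.default_rng(1)
m, n, k = 3, 8, 4                        # fewer measurements (m) than sources (k)
A = rng.standard_normal((m, n))          # sensing matrix
support = np.array([0, 2, 5, 7])         # active columns, |support| = k > m
gamma = np.array([1.0, 2.0, 0.5, 3.0])   # source powers on the support

# Ideal covariance of y = A x when x has uncorrelated entries on the support:
# R = A_S diag(gamma) A_S^T, so vec(R) = (A (.) A)_S gamma in the lifted system.
R = (A[:, support] * gamma) @ A[:, support].T
r = R.reshape(-1)                        # row-major vec(R), length m^2 = 9

# Given the support, the 9-equation lifted system pins down all k = 4 > m powers.
K = khatri_rao(A, A)
gamma_hat, *_ = np.linalg.lstsq(K[:, support], r, rcond=None)
```

The least-squares solve recovers `gamma` exactly here because the k selected columns of the self-Khatri-Rao product are generically independent; the cited empirical works go further by also estimating the support from finitely many samples.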