Preprint · 2020
DOI: 10.1101/2020.08.10.243774

Unification of Sparse Bayesian Learning Algorithms for Electromagnetic Brain Imaging with the Majorization Minimization Framework

Abstract: Methods for electro- or magnetoencephalography (EEG/MEG) based brain source imaging (BSI) using sparse Bayesian learning (SBL) have been demonstrated to achieve excellent performance in situations with low numbers of distinct active sources, such as event-related designs. This paper extends the theory and practice of SBL in three important ways. First, we reformulate three existing SBL algorithms under the majorization-minimization (MM) framework. This unification perspective not only provides a useful theoret…
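For readers unfamiliar with the framework named in the abstract, the generic MM scheme for minimizing a cost f(γ) can be sketched as follows (this is the textbook statement of the principle, not a derivation specific to this preprint):

```latex
\begin{align*}
&\text{Choose a surrogate } g \text{ at iterate } \gamma^{(k)} \text{ with} \\
&\qquad g(\gamma \mid \gamma^{(k)}) \ge f(\gamma) \;\; \forall \gamma,
 \qquad g(\gamma^{(k)} \mid \gamma^{(k)}) = f(\gamma^{(k)}); \\
&\text{update } \gamma^{(k+1)} = \arg\min_{\gamma}\, g(\gamma \mid \gamma^{(k)}). \\
&\text{Then } f(\gamma^{(k+1)}) \le g(\gamma^{(k+1)} \mid \gamma^{(k)})
 \le g(\gamma^{(k)} \mid \gamma^{(k)}) = f(\gamma^{(k)}),
\end{align*}
```

so the original cost decreases monotonically at every iteration, which is the convergence guarantee the unification perspective exploits.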


Cited by 3 publications (1 citation statement)
References 121 publications
“…The voxel variance hyperparameters are estimated by maximizing a bound on the marginal likelihood p(Y|α). Although there are multiple ways to derive update rules for α (Wipf and Nagarajan, 2009), in Champagne we utilize a convex bounding (Jordan et al., 1999) on the logarithm of the marginal likelihood (model evidence), which results in fast and convergent update rules (Hashemi et al., 2020). For a detailed derivation of the original Champagne, we refer to our previous paper (Wipf et al., 2010).…”
Section: Methods (citation type: mentioning)
confidence: 99%
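To illustrate the kind of update rule this convex bounding yields, below is a minimal numpy sketch of the standard Champagne fixed-point update for the voxel variances, following the general form derived in Wipf et al. (2010) and revisited in Hashemi et al. (2020); the function name, toy data, and the fixed known noise variance `lam` are our own illustrative assumptions, not the authors' code:

```python
import numpy as np

def champagne_sketch(Y, L, lam=1.0, n_iter=50):
    """Champagne-style SBL variance updates via convex bounding (sketch).

    Y   : (n_sensors, n_times)  sensor data
    L   : (n_sensors, n_voxels) lead field / forward model
    lam : scalar noise variance (assumed known here for simplicity)
    """
    n_sensors = Y.shape[0]
    gamma = np.ones(L.shape[1])  # voxel variance hyperparameters

    for _ in range(n_iter):
        # Model data covariance: Sigma_y = lam*I + L diag(gamma) L^T
        Sigma_y = lam * np.eye(n_sensors) + (L * gamma) @ L.T
        Sigma_inv = np.linalg.inv(Sigma_y)

        # Convex-bound fixed-point update:
        # gamma_i <- gamma_i * sqrt( mean_t (l_i^T Sigma_inv y_t)^2
        #                            / (l_i^T Sigma_inv l_i) )
        proj = L.T @ Sigma_inv @ Y                       # (n_voxels, n_times)
        num = np.mean(proj ** 2, axis=1)
        den = np.einsum('ni,nm,mi->i', L, Sigma_inv, L)  # l_i^T Sigma_inv l_i
        gamma = gamma * np.sqrt(num / den)

    # Posterior mean of the sources with the final gamma:
    Sigma_y = lam * np.eye(n_sensors) + (L * gamma) @ L.T
    X = gamma[:, None] * (L.T @ np.linalg.solve(Sigma_y, Y))
    return gamma, X

# Toy usage with random data (for illustration only):
rng = np.random.default_rng(0)
L_toy = rng.standard_normal((20, 100))
Y_toy = L_toy[:, :3] @ rng.standard_normal((3, 50))  # 3 active sources
gamma_est, X_est = champagne_sketch(Y_toy, L_toy)
```

The multiplicative form keeps each variance nonnegative, and the variances of inactive voxels are driven toward zero across iterations, which is what produces the sparse source estimates the quoted passage refers to.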