2018
DOI: 10.1007/s10548-018-0670-7

Brain Activity Mapping from MEG Data via a Hierarchical Bayesian Algorithm with Automatic Depth Weighting

Abstract: A recently proposed iterated alternating sequential (IAS) MEG inverse solver algorithm, based on the coupling of a hierarchical Bayesian model with a computationally efficient Krylov subspace linear solver, has been shown to perform well for both superficial and deep brain sources. However, a systematic study of its ability to correctly identify active brain regions is still missing. We propose novel statistical protocols to quantify the performance of MEG inverse solvers, focusing in particular on how their acc…
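
The IAS scheme summarized in the abstract alternates between a regularized least-squares update of the source amplitudes and a closed-form update of the prior variances. Below is a minimal sketch of that alternating iteration in NumPy; the gamma-hyperprior update formula, the parameter names (leadfield A, data b, scale theta_star, shape beta), and the use of a dense solve in place of the Krylov (CGLS-type) solver are simplifying assumptions, not the authors' exact implementation.

import numpy as np

def ias_map(A, b, theta_star, beta=1.6, sigma=1.0, n_iter=30):
    """Sketch of an iterated alternating sequential (IAS) MAP solver.

    Assumed model: b = A x + noise, x_j | theta_j ~ N(0, theta_j),
    theta_j ~ Gamma(beta, theta_star_j). Hypothetical signature.
    """
    theta = theta_star.copy()            # initial prior variances
    eta = beta - 1.5                     # exponent from the gamma hyperprior
    for _ in range(n_iter):
        # x-step: Tikhonov-type weighted least squares; for realistic
        # source-space sizes a Krylov method (e.g. CGLS) replaces this.
        D = np.diag(1.0 / theta)
        x = np.linalg.solve(A.T @ A / sigma**2 + D, A.T @ b / sigma**2)
        # theta-step: closed-form minimizer under the gamma hyperprior.
        theta = theta_star * (eta / 2.0
                              + np.sqrt(eta**2 / 4.0 + x**2 / (2.0 * theta_star)))
    return x, theta

With theta held at its initial constant value, the first x-step reduces to a classical minimum-norm-type regularized solution, consistent with the MNE correspondence noted in the citation excerpts below.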

Cited by 31 publications (49 citation statements); references 49 publications.
“…We emphasize that the present conditionally Gaussian prior, in its current formulation, is depth, resolution and decomposition invariant. That is, additional physiological or operator-based weighting or prior conditioning (Homa et al., 2013; Calvetti et al., 2015, 2018) is not necessary in order to balance the depth performance of the MAP estimate. Our interpretation for this is that RAMUS can correct the depth localization inaccuracies that are otherwise found with MAP estimates, as it, via the multiresolution approach, decomposes the source space into a set of visible and … (Table 3).…”
Section: Discussion (mentioning)
confidence: 99%
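
As a rough illustration of the multiresolution idea attributed to RAMUS in this excerpt, the sketch below averages MAP-type reconstructions computed on progressively coarser subsets of the source space; the decimation-by-two scheme, the map_estimate helper, and all parameter names are hypothetical simplifications, not the published RAMUS procedure.

import numpy as np

def map_estimate(A, b, alpha=1e-2):
    # Hypothetical stand-in for any MAP/regularized solver on one level.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def multiresolution_average(A, b, n_levels=3):
    """Sketch: reconstruct on nested coarse-to-fine source subsets and
    average the results on the finest grid (decimation by 2 assumed)."""
    n = A.shape[1]
    total = np.zeros(n)
    for level in range(n_levels):
        idx = np.arange(0, n, 2**level)   # coarser subset at each level
        x_fine = np.zeros(n)
        x_fine[idx] = map_estimate(A[:, idx], b)
        total += x_fine
    return total / n_levels

The intuition, per the excerpt, is that decomposing the source space across resolutions balances the visibility of superficial and deep sources, so no extra depth weighting is needed.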
“…IG has been suggested for depth localization in Calvetti et al. (2009), where the IG- and G-based IAS MAP estimates have been shown to correspond to the minimum support and minimum current estimates (MSE and MCE) (Nagarajan et al., 2006), respectively, while the first step of the iteration coincides with the classical minimum norm estimate (MNE) (Hämäläinen et al., 1993). A recent comparison between IAS and other brain activity reconstruction techniques can be found in Calvetti et al. (2018).…”
Section: Hierarchical Bayesian Model (mentioning)
confidence: 99%
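
The correspondence described in this excerpt can be made concrete by writing out the MAP objective once the variance hyperparameters are profiled out. The display below is a standard sketch for a diagonal conditionally Gaussian model; the exact constants depend on the hyperprior parameterization used in the cited papers.

% Assumed model: b = A x + e,  x_j | theta_j ~ N(0, theta_j)
\begin{aligned}
(x,\theta)_{\mathrm{MAP}}
  &= \arg\min_{x,\theta}\;
     \frac{1}{2\sigma^2}\lVert b - Ax\rVert^2
     + \sum_j \frac{x_j^2}{2\theta_j}
     + \sum_j \phi(\theta_j),\\
\text{G (gamma) hyperprior:}\quad
  \phi(\theta_j) &= \frac{\theta_j}{\theta^*}
    - \Bigl(\beta - \tfrac{3}{2}\Bigr)\log\theta_j
  \;\Longrightarrow\; \text{an } \ell_1\text{-type penalty in } x
  \text{ (minimum current),}\\
\text{IG (inverse gamma) hyperprior:}\quad
  \phi(\theta_j) &= \frac{\theta^*}{\theta_j}
    + \Bigl(\beta + \tfrac{3}{2}\Bigr)\log\theta_j
  \;\Longrightarrow\; \text{a } \log\text{-type penalty in } x
  \text{ (minimum support).}
\end{aligned}

Minimizing over each theta_j for fixed x_j reveals the effective penalty: under the gamma hyperprior the minimizer scales like |x_j|, giving an ell_1-like cost, while under the inverse gamma it scales like x_j^2 plus a constant inside a logarithm, giving a more strongly sparsifying cost.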
“…We presented a hierarchical model for estimation of multiple current dipoles from M/EEG recordings. The new model generalizes previous work on the same topic, with multiple benefits: an improved estimate of the number of active sources; reduced localization error; great stability when the value of the input parameter varies in a wide range, to the extent that we can claim estimates … From a Bayesian modeling perspective, our work partly relates to the work in [30,9,5], where hierarchical Bayesian modeling was used in the M/EEG inverse problem, and to that in [6], where the approach introduced in [5] was further explored. However, in these studies the authors used a distributed source model, as opposed to the ECD model used here, and their primary goals were (i) to show that hierarchical Bayesian modeling could include multiple known regularizers as special cases and (ii) to investigate to what extent the introduction of hyperpriors could reduce the well-known depth bias affecting particularly the classic ℓ2-regularized solutions.…”
Section: Experimental Data (mentioning)
confidence: 73%
“…While the two minimizers are likely not far apart, the local minimizer is typically sparser, and the convergence to it is faster. 6. Computed examples.…”
(mentioning)
confidence: 99%
“…In order to avoid one frame being favored over another, however, it is important that the data are equally sensitive to components in every frame. Fortunately, the sensitivity analysis developed by the authors in [6,10,9] naturally provides such a scaling. The proposed sensitivity weights are rooted in the very natural Bayesian principle of exchangeability, stating that no set of non-zero components with a given cardinality should be favored over any other.…”
(mentioning)
confidence: 99%
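
As a simple illustration of sensitivity-based scaling of the kind this excerpt describes, a common convention is to weight each source component by the norm of its leadfield column, so that all components can produce comparable data energy. The sketch below uses that column-norm convention; the exponent and the names (A, sensitivity_weights) are assumptions, not the weights defined in [6,10,9].

import numpy as np

def sensitivity_weights(A, power=1.0, eps=1e-12):
    """Column-norm sensitivity weights for a leadfield A (sensors x sources).
    A unit source j produces the data A[:, j]; dividing by its norm
    equalizes the data energy that each component can generate."""
    s = np.linalg.norm(A, axis=0)         # sensitivity of each component
    return 1.0 / np.maximum(s, eps)**power

# Usage sketch: rescale the leadfield so every column has comparable
# sensitivity before applying a sparsity-promoting prior.
# A_scaled = A * sensitivity_weights(A)[None, :]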