2018
DOI: 10.1214/16-ba1030
Bayesian Inference and Testing of Group Differences in Brain Networks

Abstract: Network data are increasingly collected along with other variables of interest. Our motivation is drawn from neurophysiology studies measuring brain connectivity networks for a sample of individuals along with their membership to a low or high creative reasoning group. It is of paramount importance to develop statistical methods for testing of global and local changes in the structural interconnections among brain regions across groups. We develop a general Bayesian procedure for inference and testin…
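The paper's actual procedure models group-specific distributions of networks via flexible Bayesian factorizations; as a far simpler illustration of the per-edge group-testing idea the abstract describes, the sketch below compares edge probabilities across a "low" and a "high" group with independent Beta(1, 1) priors. All names and the toy data are ours, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_difference_probs(A_low, A_high, n_draws=5000):
    """For each edge (u, v), Monte Carlo estimate of
    P(p_high > p_low | data) under independent Beta(1, 1) priors
    on each group's edge probability.
    A_low, A_high: (n_subjects, V, V) binary adjacency arrays."""
    k1, n1 = A_low.sum(axis=0), A_low.shape[0]
    k2, n2 = A_high.sum(axis=0), A_high.shape[0]
    # Beta posteriors, sampled edge-wise for both groups
    p1 = rng.beta(1 + k1, 1 + n1 - k1, size=(n_draws,) + k1.shape)
    p2 = rng.beta(1 + k2, 1 + n2 - k2, size=(n_draws,) + k2.shape)
    return (p2 > p1).mean(axis=0)

# toy data: 20 subjects per group, 5 regions, one strengthened edge
V = 5
A_low = (rng.random((20, V, V)) < 0.2).astype(int)
A_high = (rng.random((20, V, V)) < 0.2).astype(int)
A_high[:, 0, 1] = 1  # edge (0, 1) always present in the high group
probs = edge_difference_probs(A_low, A_high)
print(probs[0, 1])  # close to 1: strong evidence of a group difference
```

Unlike the paper's method, this per-edge scheme ignores dependence across edges; it only illustrates what a "local" test of edge-level group differences asks.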

Cited by 85 publications (77 citation statements)
References 65 publications
“…Hypothesis testing on a type of graph average has also been proposed (Ginestet et al, 2017). Bayesian nonparametrics approaches for modeling populations of networks allow to test for local edge differences between the groups (Durante and Dunson, 2018), but are computationally feasible only for small networks.…”
Section: Introduction
confidence: 99%
“…We applied our model to study communication behavior on real social media data (Instagram and Twitter), as well as for brain connectivity data. We saw that by not relying on a user-specified threshold, our proposed method offers robustness over the methodology of [10], besides outperforming other baselines like LDA and n-gram language models.…”
Section: Results
confidence: 94%
“…However, much of the work has focused on testing subgraphs within a larger graph (e.g., [7]), or on one-sample tests comparing a single graph to a null model (e.g., [8]). Work focusing on populations of graphs has received considerably less attention and falls into one of two categories: that of [9], which introduces a geometric characterization of the network using the so-called Fréchet mean, and that of [10], who proposed a Bayesian latent-variable model for unweighted graphs. We focus on the latter, which allows us to bring the powerful machinery of probabilistic hierarchical modeling to the table, allowing noisiness and missingness, and providing interpretable confidence scores.…”
Section: Introduction
confidence: 99%
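The latent-variable approach for unweighted graphs credited to [10] above is in the spirit of latent space network models, where each node carries a latent vector and edges form independently given those vectors. A minimal generative sketch of that idea (our own illustrative code, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_graph(X):
    """Sample an undirected, unweighted graph whose edge probabilities
    follow a logistic latent-factor model: P(u ~ v) = sigmoid(<x_u, x_v>)."""
    logits = X @ X.T
    P = 1.0 / (1.0 + np.exp(-logits))
    U = rng.random(P.shape)
    A = np.triu((U < P).astype(int), k=1)  # upper triangle, no self-loops
    return A + A.T                          # symmetrize

X = rng.normal(size=(10, 2))  # 10 nodes, 2 latent dimensions
A = sample_graph(X)
print(A.shape, A.trace(), bool((A == A.T).all()))
```

In a hierarchical Bayesian treatment, priors on the latent vectors (and group-level parameters) are what make noisy or partially missing graphs tractable and yield the interpretable confidence scores the quote mentions.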
“…, X^q_i), and ε_i is the noise. Assumption 1 is satisfied by similar models (Fosdick and Hoff 2015; Durante, Dunson, and Vogelstein 2017; Durante and Dunson 2018; Tang et al. 2017; Young and Scheinerman 2007), whereas Assumption 2 is a standard sparsity assumption for high-dimensional regression (Tibshirani 1996; Hastie et al. 2015; Hastie, Tibshirani, and Friedman 2008). Under these two assumptions, a schematic of the model (2) can be seen in Fig.…”
Section: A Statistical Model for Multi-Scale Network Regression
confidence: 99%
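The "standard sparsity assumption" cited in that last statement is the one underlying the lasso (Tibshirani 1996): most regression coefficients are exactly zero, and an L1 penalty recovers the few that are not. A minimal coordinate-descent sketch of the lasso itself (illustrative only; the cited network-regression paper's estimator is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(2)

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent lasso:
    argmin_b 0.5 * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding coordinate j, then soft-threshold
            r = y - X @ b + X[:, j] * b[j]
            z = X[:, j] @ r
            b[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
    return b

# sparse ground truth: only the first two of ten coefficients are nonzero
X = rng.normal(size=(100, 10))
b_true = np.zeros(10)
b_true[:2] = [2.0, -3.0]
y = X @ b_true + 0.1 * rng.normal(size=100)
b_hat = lasso_cd(X, y, lam=5.0)
print(np.round(b_hat, 2))  # first two entries large, the rest shrunk to ~0
```

The soft-thresholding step is exactly where the sparsity assumption bites: coordinates whose correlation with the residual falls below the penalty are set to zero rather than merely shrunk.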