2021
DOI: 10.1101/2021.06.16.448764
Preprint
The Cost of Untracked Diversity in Brain-Imaging Prediction

Abstract: Brain-imaging research enjoys increasing adoption of supervised machine learning for single-subject disease classification. Yet, the success of these algorithms likely depends on population diversity, including demographic differences and other factors that may be outside of primary scientific interest. Here, we capitalize on propensity scores as a composite confound index to quantify diversity due to major sources of population stratification. We delineate the impact of population heterogeneity on the predict…

Cited by 11 publications (16 citation statements)
References 146 publications (170 reference statements)
“…The neuroimaging community is beginning to recognize and explore the impacts of ethics in machine learning models, with a particular focus on bias in datasets and models (Benkarim et al 2021). Trust is distinct from bias, and it is an equally important yet widely overlooked facet of ethics in neuroimaging models.…”
Section: Ethics in Neuroimaging: The Role of Bias and Trust
confidence: 99%
“…Connectome-based predictive models are at the forefront of this trend (Finn and Rosenberg 2021;Shen et al 2017), showing promising results in understanding general cognition (Beaty et al 2018;Song et al 2021;Dubois et al 2018;Rosenberg et al 2018) and mental health (Du et al 2018;Lynall et al 2010;Nielsen et al 2020). Improvements in accuracy (Cui and Gong 2018;Gan et al 2021;Li et al 2021;Kohoutová et al 2020) and fairness (i.e., lack of bias (Benkarim et al 2021)) of connectome-based models represent an important step in preparing these models for real-world applications. But, accurate and bias-free models are not enough.…”
Section: Introduction
confidence: 99%
“…Together, this reflects a paradigm shift in human neuroscience research from a focus on the group to a focus on the individual, with important potential applications to clinical practice [21][22][23]. To deliver on this promise, however, these approaches must identify patterns of brain activity that are relevant to the phenotype of interest in a given individual: the patient sitting before their clinician, for example. Previous linear modelling work has relied on the assumptions that (1) a single brain network is associated with a given phenotype, with patterns of activity within that network varying across individuals 10,24,25; and (2) larger, more heterogeneous samples will more accurately and reliably capture this single model 26,27. But although many published models have demonstrated impressive generalizability 6,9,10, they do not account for brain-phenotype relationships in all individuals 13,14.…”
confidence: 99%
“…But although many published models have demonstrated impressive generalizability 6,9,10, they do not account for brain-phenotype relationships in all individuals 13,14. This raises the crucial question of in whom models fail, and why. The existence of structured model failure (some individuals who are better fit by a model than others 14,24,26) would suggest that one brain-phenotype relationship does not fit all, and that systematic bias may determine who is fit and who is not. This, in turn, may engender imprecise, misleading and in some cases harmful model interpretations.…”
confidence: 99%