2020
DOI: 10.1101/2020.07.28.208579
Preprint
Integrating large-scale neuroimaging research datasets: harmonisation of white matter hyperintensity measurements across Whitehall and UK Biobank datasets

Abstract: Large scale neuroimaging datasets present the possibility of providing normative distributions for a wide variety of neuroimaging markers, which would vastly improve the clinical utility of these measures. However, a major challenge is our current poor ability to integrate measures across different large-scale datasets, due to inconsistencies in imaging and non-imaging measures across the different protocols and populations. Here we explore the harmonisation of white matter hyperintensity (WMH) measures across…

Cited by 5 publications (5 citation statements)
References 31 publications
“…These findings are somewhat unsurprising given that the software tool for WMH measurement (BIANCA) was trained on data collected from the Siemens MRI protocol. When adequate training data are available from the GE protocol, and in older subjects where higher WMH volumes are expected, it will be important to retrain the BIANCA algorithm on both Siemens and GE data and this may improve consistency of WMH IDPs across scanners from the different manufacturers (Bordin et al, 2020).…”
Section: Results
Mentioning confidence: 99%
“…Here, WMHs are quantified using the FSL-BIANCA tool, a fully automated, supervised tool for WMH segmentation, based on the k-nearest neighbour algorithm ( Griffanti et al, 2016 ). BIANCA has been optimised on two clinical datasets, applied in healthy older adults ( Griffanti et al, 2018 ), and trained using an openly available training dataset (“Mixed_WH-UKB_FLAIR_T1”, available at 5 , Bordin et al, 2020 ). The training dataset was generated using FLAIR, T1 and manually segmented WMH images from a sub-sample of 24 participants each from the Whitehall II Imaging Sub-study Siemens Verio 3T scanner, the Prisma 3T scanner, and 12 participants from the UK Biobank Study (Siemens Skyra 3T).…”
Section: Methods and Analysis
Mentioning confidence: 99%
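The statement above describes BIANCA as a supervised segmentation tool based on the k-nearest neighbour algorithm. A minimal, self-contained sketch of kNN voxel classification is shown below; this is not BIANCA itself, and the toy one-dimensional "intensity" feature, the value of `k`, and the example data are all illustrative assumptions (BIANCA's actual features include FLAIR/T1 intensities and spatial coordinates):

```python
import numpy as np

def knn_segment(train_feats, train_labels, test_feats, k=3):
    """Label each test voxel by majority vote among its k nearest
    training voxels in feature space (Euclidean distance).
    Binary labels: 1 = hyperintense (lesion-like), 0 = normal tissue."""
    preds = []
    for x in test_feats:
        dists = np.linalg.norm(train_feats - x, axis=1)
        nearest = train_labels[np.argsort(dists)[:k]]
        preds.append(int(np.round(nearest.mean())))  # majority vote
    return np.array(preds)

# Toy example: one "intensity" feature per voxel (hypothetical values)
train_feats = np.array([[0.1], [0.2], [0.15], [0.8], [0.9], [0.85]])
train_labels = np.array([0, 0, 0, 1, 1, 1])
test_feats = np.array([[0.12], [0.88]])
print(knn_segment(train_feats, train_labels, test_feats))  # → [0 1]
```

Retraining on pooled data from multiple scanners, as the statements above propose, amounts to widening `train_feats`/`train_labels` to cover each protocol's intensity distribution so the neighbour search generalises across scanners.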
“…The training dataset was generated using FLAIR, T1 and manually segmented WMH images from a sub-sample of 24 participants each from the Whitehall II Imaging Sub-study Siemens Verio 3T scanner, the Prisma 3T scanner, and 12 participants from the UK Biobank Study (Siemens Skyra 3T). Training BIANCA with this dataset reduces the variability in BIANCA performance and generates more consistent WMH measures across images acquired in different cohorts and scanners ( Bordin et al, 2020 ).…”
Section: Methods and Analysis
Mentioning confidence: 99%
“…different scanners or acquisition protocols). These include reducing the variance in the image-level characteristics (Bordin et al., 2020) (induced by the scanner and acquisition protocol), estimating site effects to correct the measurements derived from the images (Fortin et al., 2018), improving model generalisability (Ganin, Ustinova, Ajakan, Germain, Larochelle, Laviolette, Marchand, Lempitsky, 2016; Tzeng, Hoffman, Darrell, Saenko, 2015) (so that it is not affected by differences in intensity distributions or spatial resolution), or a combination of the above. Commonly used techniques to improve model generalisability include data augmentation (Shorten and Khoshgoftaar, 2019) and the use of ensemble networks (with different initialisations (Li et al., 2018) or planes (Prasoon et al., 2013)), which have been shown to be resistant to over-fitting (Krizhevsky, Sutskever, Hinton, 2012; Simonyan, Zisserman; Kamnitsas, Ledig, Newcombe, Simpson, Kane, Menon, Rueckert, Glocker, 2017; Winzeck, Mocking, Bezerra, Bouts, McIntosh, Diwan, Garg, Chutinet, Kimberly, Copen, et al., 2019), which can occur with more complex models (Opitz and Maclin, 1999).…”
Section: Introduction
Mentioning confidence: 99%
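Two of the generalisability techniques named in the statement above, data augmentation and ensembles with different initialisations, can be sketched minimally as follows. The `augment` and `ensemble_predict` functions and the threshold "models" are illustrative assumptions standing in for trained networks, not any cited implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, rng):
    """Toy augmentation: a random left-right flip plus a small global
    intensity rescaling, mimicking scanner-to-scanner contrast shifts."""
    out = image[:, ::-1] if rng.random() < 0.5 else image
    return out * rng.uniform(0.9, 1.1)

def ensemble_predict(models, x):
    """Average the scores of independently initialised models; the mean
    score is less sensitive to any single model's over-fitting."""
    return np.mean([m(x) for m in models], axis=0)

# Hypothetical "ensemble": thresholding functions with jittered thresholds,
# standing in for networks trained from different random initialisations.
models = [lambda x, t=t: (x > t).astype(float) for t in (0.45, 0.5, 0.55)]
image = np.array([[0.2, 0.6], [0.48, 0.9]])

aug = augment(image, rng)          # same shape, perturbed intensities
scores = ensemble_predict(models, image)  # soft per-voxel lesion scores
```

Voxels where the models disagree (e.g. intensity 0.48 here) receive an intermediate averaged score rather than a hard flip between 0 and 1, which is the practical benefit ensembling offers over any single model.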