2019
DOI: 10.3389/fnins.2019.00679
Intra-Scanner and Inter-Scanner Reproducibility of Automatic White Matter Hyperintensities Quantification

Abstract: Objectives: To evaluate the reproducibility of white matter hyperintensities (WMH) quantification from multiple perspectives, and to examine the effects of the scan–rescan procedure, scanner type, imaging protocol, scanner software upgrades, and automatic segmentation tools on WMH quantification results using magnetic resonance imaging (MRI). Methods: Six post-stroke subjects (4 males; mean age = 62.8, range = 58–72 years) were scanned and rescanned with both 3D T1-weighted, 2D and 3…

Cited by 21 publications (25 citation statements)
References 31 publications
“…However, the scope of this evaluation was to explore how differences in manual ratings can impact a supervised segmentation method like BIANCA. To help quantify the variability caused by manual segmentation, we looked at the average agreement (DI) range and found that our between- and within-rater agreement is comparable with the scan–rescan agreement in WMHs assessed in a previous study (inter-scanner range: 0.63–0.65; intra-scanner range: 0.63–0.77) (Guo et al, 2019). This suggests that the impact of the rater on the final segmentation is comparable to the effect of repeating the acquisition using the same settings.…”
Section: Discussion
Confidence: 78%
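The agreement figures quoted above are Dice similarity index (DI) values, defined as twice the overlap between two binary segmentation masks divided by their total size. As an illustrative sketch only (not code from the cited studies), the computation looks like this:

```python
import numpy as np

def dice_index(mask_a, mask_b):
    """Dice similarity index: 2|A∩B| / (|A| + |B|) for two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    # Convention: two empty masks are in perfect agreement
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy 1-D "masks" standing in for WMH voxel labels from two raters
rater_1 = [1, 1, 0, 1, 0, 0]
rater_2 = [1, 0, 0, 1, 1, 0]
print(dice_index(rater_1, rater_2))  # 2*2 / (3+3) ≈ 0.667
```

A DI of 1.0 indicates identical masks and 0.0 indicates no overlap, so the reported scan–rescan ranges of 0.63–0.77 reflect substantial but imperfect agreement.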
“…We then identified optimal pre-processing and analysis strategies to reduce non-biological variability across datasets, while retaining or taking into account (modelling) the biological variability. Effect of rater : in the training phase, BIANCA requires manually delineated WMH masks, which are known to suffer from inter- and intra-rater variability (Guo et al, 2019). We wanted to assess whether BIANCA trained with different manual masks (either multiple annotations by different raters or repeated annotations by the same rater) generates WMH segmentations that are more or less variable than the manual annotations among themselves.…”
Section: Methods
Confidence: 99%