2021
DOI: 10.1002/hbm.25636

CEREBRUM‐7T: Fast and Fully Volumetric Brain Segmentation of 7 Tesla MR Volumes

Abstract: Ultra-high-field magnetic resonance imaging (MRI) enables sub-millimetre resolution imaging of the human brain, allowing the study of functional circuits of cortical layers at the meso-scale. An essential step in many functional and structural neuroimaging studies is segmentation, the operation of partitioning MR images into anatomical structures. Despite recent efforts in brain imaging analysis, the literature lacks accurate and fast methods for segmenting 7-tesla (7T) brain MRI. We here present CEREBRUM…
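The abstract describes segmentation as partitioning an MR volume into anatomical structures. A minimal sketch of the final step of such a fully volumetric pipeline, assuming a network has already produced per-voxel class probabilities (all names, shapes, and class counts here are illustrative, not taken from CEREBRUM-7T):

```python
import numpy as np

# Hypothetical output of a 3D segmentation network: per-voxel class
# probabilities of shape (n_classes, depth, height, width).
rng = np.random.default_rng(0)
probs = rng.random((4, 8, 8, 8))
probs /= probs.sum(axis=0, keepdims=True)  # normalise per voxel

# Fully volumetric segmentation: label each voxel with its most
# probable class in a single pass over the whole volume.
labels = probs.argmax(axis=0)
print(labels.shape)
```

Processing the whole volume at once, rather than slice by slice, is what "fully volumetric" refers to in the title.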

Cited by 20 publications (44 citation statements). References 48 publications.
“…CEREBRUM-7T employed a deep encoder/decoder network with three layers and achieved high DSC of 0.90 and 0.86 in WM and BG.21 However, T1-weighted images, which have higher gray–WM contrast, were employed. In our study, we acquired T2w images instead because they have a higher contrast-to-noise ratio for imaging PVS.…”
Section: Discussion
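The citation statements above compare methods by Dice similarity coefficient (DSC). A minimal sketch of how DSC is computed for one tissue label between a predicted and a reference segmentation (toy volumes and label codes are illustrative):

```python
import numpy as np

def dice(pred, truth, label):
    """DSC = 2|A ∩ B| / (|A| + |B|) for one label's voxel sets."""
    a = (pred == label)
    b = (truth == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy volumes: label 0 = background, label 1 = a tissue class.
truth = np.zeros((4, 4, 4), dtype=int)
truth[:2] = 1
pred = truth.copy()
pred[0, 0, 0] = 0  # one mislabelled voxel
print(dice(pred, truth, 1))
```

A DSC of 1.0 means perfect overlap; the reported 0.90 (WM) and 0.86 (BG) indicate strong but imperfect agreement with the reference.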
“…Although CNN‐based tissue segmentation of 7T healthy brain MRI images has been reported before,21,22 our study is the first to evaluate the performance of such an approach for WM segmentation based on T2w images. CEREBRUM‐7T employed a deep encoder/decoder network with three layers and achieved high DSC of 0.90 and 0.86 in WM and BG.21 However, T1‐weighted images, which have higher gray–WM contrast, were employed.…”
Section: Discussion
“…We have chosen this laborious manual segmentation process over a fully automatic one because of the lack of optimized and validated segmentation tools for our very high resolution data. The resulting tissue segmentations are available as a part of our data repository and can be freely inspected or used for validating the results of automatic algorithms (Bazin et al, 2014; Svanera et al, 2021). Note that we used the scoops of interest rather than segmenting all of the brain tissue available within our imaging slabs to focus our efforts on achieving the best segmentation for our regions of interest.…”
Section: Methods
“…(10,11) As a result, multi-contrast segmentation methods developed for lower field strengths, such as FreeSurfer and Classification using Derivative-based Features (C-DEF), may be less effective when applied to 7T data.(12) Sequences such as multi-echo GRE and Magnetization Prepared 2 Rapid Acquisition Gradient Echoes (MP2RAGE) offer the opportunity to obtain multiple imaging contrasts in the same acquisition, which not only share similar distortions but also alleviate the need for registration between images.(13) In addition, incorrect skull stripping, which especially affects 7T scans, can also contribute to inaccurate brain segmentation.…”
Section: Introduction
“…A few prior studies have attempted to apply CNNs to segment high-field MRI data. Custom CNNs have been used for cortical lesion segmentation (16) and multiclass whole-brain segmentation (12) on 7T data. For this study, we hypothesized that the capabilities of nnU-Net, perhaps boosted by domain-specific adaptation, may reduce the dependence on auxiliary image contrasts or a priori information by relying instead on contextual information extracted from the training dataset.…”
Section: Introduction