Objective: Surveillance of postoperative vestibular schwannomas currently relies on manual segmentation and measurement of the tumor by content experts, which is both labor intensive and time consuming. We aimed to develop and validate deep learning models for automatic segmentation of postoperative vestibular schwannomas on gadolinium-enhanced T1-weighted magnetic resonance imaging (GdT1WI) and noncontrast high-resolution T2-weighted magnetic resonance imaging (HRT2WI).

Study Design: A supervised machine learning approach using a U-Net model was applied to segment magnetic resonance images into pixels representing vestibular schwannoma and background.

Setting: Tertiary care hospital.

Patients: Our retrospective data set consisted of 122 GdT1WI and 122 HRT2WI studies in 82 postoperative adult patients with a vestibular schwannoma treated with subtotal surgical resection between September 1, 2007, and April 17, 2018. Forty-nine percent of the cohort was female, the mean age at the time of surgery was 49.8 years, and the median time from surgery to follow-up scan was 2.26 years.
Intervention(s): N/A.

Main Outcome Measure(s): Tumor areas were manually segmented in axial images and used as ground truth for training and evaluation of the model. We measured the Dice score of the predicted segmentations against the experts' manual segmentations to assess the model's accuracy.

Results: The GdT1WI model achieved a Dice score of 0.89, and the HRT2WI model achieved a Dice score of 0.85.
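As an aside, the Dice score reported above is the standard overlap metric 2|A∩B| / (|A| + |B|) between the predicted and ground-truth tumor masks. The following minimal sketch (illustrative only, not the authors' code; the toy masks are hypothetical) shows the computation on binary segmentation masks:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 4x4 masks standing in for one axial slice (hypothetical data):
# predicted tumor pixels vs. an expert's manual segmentation.
pred  = np.array([[0, 0, 0, 0],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])
truth = np.array([[0, 0, 0, 0],
                  [0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0]])
print(round(dice_score(pred, truth), 3))  # 2*3 / (4+3) ≈ 0.857
```

A score of 1.0 indicates pixel-perfect agreement with the manual segmentation, so the reported 0.89 (GdT1WI) and 0.85 (HRT2WI) reflect substantial overlap with the expert ground truth.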
Conclusion: We demonstrated that postoperative vestibular schwannomas can be accurately segmented on GdT1WI and HRT2WI without human intervention using deep learning. This artificial intelligence technology has the potential to improve the postoperative surveillance and management of patients with vestibular schwannomas.