Prostate MR image segmentation has been an area of intense research due to the increased use of MRI in the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Because MR image appearance, resolution and the presence of artifacts are affected by differences in scanners and protocols, these differences can in turn have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted at the MICCAI 2012 conference. The challenge included 100 prostate MR cases from 4 different centers, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. The algorithms showed a wide variety of methods and implementations, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary- and volume-based metrics, which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with overall scores of 85.72 and 84.29. Both algorithms were significantly better than all other algorithms in the challenge (p < 0.05) and had efficient implementations, with run times of 8 minutes and 3 seconds per case respectively.
Overall, active appearance model based approaches seemed to outperform other approaches such as multi-atlas registration, in both accuracy and computation time. Although average algorithm performance was good to excellent and the Imorphics algorithm outperformed the second observer on average, we showed that combining algorithms might lead to further improvement, indicating that optimal performance for prostate segmentation has not yet been reached. All results are available online at http://promise12.grand-challenge.org/.
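The scoring described above combines boundary- and volume-based metrics into a single score anchored to human expert performance. The sketch below illustrates one way such a mapping can work: a linear scale on which a perfect result scores 100 and second-observer-level error scores 85. The exact formula, the 85-point anchor, and all function names are assumptions for illustration, not the verified PROMISE12 definition.

```python
def metric_score(algorithm_error, expert_error, perfect_error=0.0):
    """Map one error metric (e.g. mean boundary distance) to a 0-100 score.

    Linear interpolation: perfect_error -> 100, expert_error -> 85,
    clipped at 0 for very poor results. (Illustrative assumption.)
    """
    if expert_error == perfect_error:
        return 100.0
    score = 100.0 - 15.0 * (algorithm_error - perfect_error) / (expert_error - perfect_error)
    return max(score, 0.0)


def case_score(algorithm_errors, expert_errors):
    """Average the per-metric scores into a single score for one case."""
    scores = [metric_score(a, e) for a, e in zip(algorithm_errors, expert_errors)]
    return sum(scores) / len(scores)
```

For example, an algorithm that matches the second observer on every metric would score 85, and better-than-expert errors score above 85 — which is consistent with the winning overall scores of 85.72 and 84.29 sitting near expert-level performance.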
Purpose: Automated delineation of structures and organs is a key step in medical imaging. However, due to the large number and diversity of structures and the large variety of segmentation algorithms, a consensus is lacking as to which automated segmentation method works best for certain applications. Segmentation challenges are a good approach for unbiased evaluation and comparison of segmentation algorithms. Methods: In this work, we describe and present the results of the Head and Neck Auto-Segmentation Challenge 2015, a satellite event at the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2015 conference. Six teams participated in a challenge to segment nine structures in the head and neck region of CT images: brainstem, mandible, chiasm, bilateral optic nerves, bilateral parotid glands, and bilateral submandibular glands. Results: This paper presents the quantitative results of this challenge using multiple established error metrics and a well-defined ranking system. The strengths and weaknesses of the different auto-segmentation approaches are analyzed and discussed. Conclusions: The Head and Neck Auto-Segmentation Challenge 2015 was a good opportunity to assess the current state of the art in segmentation of organs at risk for radiotherapy treatment. Participating teams had the possibility to compare their approaches to other methods under unbiased and standardized circumstances. The results demonstrate a clear tendency toward more general-purpose and fewer structure-specific segmentation algorithms.
Objective. Accurate automated segmentation of cartilage should provide rapid, reliable outcomes for both epidemiological studies and clinical trials. We aimed to assess the precision and responsiveness of cartilage thickness measured with careful manual segmentation or a novel automated technique. Methods. Agreement of automated segmentation was assessed against 2 manual segmentation datasets: 379 magnetic resonance images manually segmented in-house (training set), and 582 from the Osteoarthritis Initiative with data available at 0, 1, and 2 years (biomarkers set). Agreement of mean thickness was assessed using Bland-Altman plots, and change with pairwise Student t test, in the central medial femur (cMF) and central medial tibia (cMT) regions. Repeatability was assessed on a set of 19 knees imaged twice on the same day. Responsiveness was assessed using standardized response means (SRM). Results. Agreement of manual versus automated methods was excellent, with no meaningful systematic bias (training set: cMF bias 0.1 mm, 95% CI ± 0.35; biomarkers set: bias 0.1 mm ± 0.4). The smallest detectable difference for cMF was 0.13 mm (coefficient of variation 3.1%), and for cMT 0.16 mm (2.65%). Reported change using manual segmentations in the cMF region at 1 year was −0.031 mm (95% CI −0.022, −0.039), p < 10⁻⁴, SRM −0.31 (−0.23, −0.38); and at 2 years was −0.071 (−0.058, −0.085), p < 10⁻⁴, SRM −0.43 (−0.36, −0.49). Reported change using automated segmentations in the cMF at 1 year was −0.059 (−0.047, −0.071), p < 10⁻⁴, SRM −0.41 (−0.34, −0.48); and at 2 years was −0.14 (−0.123, −0.157), p < 10⁻⁴, SRM −0.67 (−0.6, −0.72). Conclusion. A novel cartilage segmentation method provides highly accurate and repeatable measures, with cartilage thickness measurements comparable to those of careful manual segmentation but with improved responsiveness.
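The standardized response mean used above to quantify responsiveness is the mean of the change scores divided by the standard deviation of those change scores. A minimal sketch (the function name and sample data are illustrative):

```python
import statistics


def srm(changes):
    """Standardized response mean: mean change divided by the
    sample standard deviation of the change scores. Larger |SRM|
    indicates a more responsive measurement."""
    return statistics.mean(changes) / statistics.stdev(changes)


# Hypothetical per-knee thickness changes in mm over one year:
example_changes = [-0.05, -0.07, -0.03, -0.06, -0.04]
```

A negative SRM here simply reflects cartilage thinning; the magnitude is what is compared between the manual and automated pipelines.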
We present a fully automatic model based system for segmenting the mandible, parotid and submandibular glands, brainstem, optic nerves and the optic chiasm in CT images, which won the MICCAI 2015 Head and Neck Auto-Segmentation Grand Challenge. The method is based on Active Appearance Models (AAM) built from manually segmented examples from a cancer imaging archive provided by the challenge organisers. High-quality anatomical correspondences for the models are generated using a Minimum Description Length (MDL) groupwise image registration method. A multi-start optimisation scheme is used to robustly match the model to new images. The model has been cross-validated on the training data to a good degree of accuracy, and successfully segmented all the test data.
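The multi-start optimisation idea above — running a local search from several initialisations and keeping the best result, so that model matching does not get stuck in a poor local minimum — can be illustrated generically. The sketch below uses a toy cost function and a simple coordinate-descent refinement as stand-ins for the actual appearance-model fitting; every name and detail here is an assumption for illustration, not the authors' implementation.

```python
import random


def local_search(cost, x, step=0.5, iters=200):
    """Simple coordinate-descent refinement with step halving."""
    f = cost(x)
    s = step
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for d in (s, -s):
                y = list(x)
                y[i] += d
                fy = cost(y)
                if fy < f:
                    x, f = y, fy
                    improved = True
        if not improved:
            s *= 0.5          # shrink the step once no move helps
            if s < 1e-6:
                break
    return x, f


def multi_start_minimise(cost, n_starts=10, dim=2, lo=-5.0, hi=5.0, seed=0):
    """Run a local search from several random initialisations and keep
    the best; a stand-in for matching a model from multiple initial poses."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_starts):
        x0 = [rng.uniform(lo, hi) for _ in range(dim)]
        x, f = local_search(cost, x0)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f
```

In the real system the "cost" would measure how well the appearance model explains the image, and the starts would correspond to different candidate poses of the model in a new scan.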