Purpose: Automated delineation of structures and organs is a key step in medical imaging. However, due to the large number and diversity of structures and the variety of segmentation algorithms, there is no consensus on which automated segmentation method works best for a given application. Segmentation challenges are a good approach for unbiased evaluation and comparison of segmentation algorithms. Methods: In this work, we describe and present the results of the Head and Neck Auto-Segmentation Challenge 2015, a satellite event at the Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2015 conference. Six teams participated in a challenge to segment nine structures in the head and neck region of CT images: brainstem, mandible, chiasm, bilateral optic nerves, bilateral parotid glands, and bilateral submandibular glands. Results: This paper presents the quantitative results of this challenge using multiple established error metrics and a well-defined ranking system. The strengths and weaknesses of the different auto-segmentation approaches are analyzed and discussed. Conclusions: The Head and Neck Auto-Segmentation Challenge 2015 was a good opportunity to assess the current state of the art in segmentation of organs at risk for radiotherapy treatment. Participating teams were able to compare their approaches to other methods under unbiased and standardized conditions. The results demonstrate a clear trend toward general-purpose rather than structure-specific segmentation algorithms.
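The specific error metrics and ranking system are defined in the challenge paper and are not reproduced here. As a hedged illustration only, the Python sketch below computes two measures commonly used in segmentation challenges of this kind: the Dice similarity coefficient and a 95th-percentile surface distance between binary masks on a common voxel grid. The function names and the assumption of non-empty masks are mine, not the challenge's.

```python
# Hedged sketch: two error measures commonly used in segmentation challenges.
# The actual challenge metric set and ranking scheme are defined in the paper.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

def _surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels of a binary mask."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def hausdorff95(pred: np.ndarray, ref: np.ndarray,
                spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric surface distance (in mm).

    Assumes both masks are non-empty and share the same voxel grid/spacing.
    """
    sp, sr = _surface(pred), _surface(ref)
    dt_ref = distance_transform_edt(~sr, sampling=spacing)   # distance to ref surface
    dt_pred = distance_transform_edt(~sp, sampling=spacing)  # distance to pred surface
    dists = np.hstack([dt_ref[sp], dt_pred[sr]])
    return float(np.percentile(dists, 95))
```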
Spatial regularization is essential in image registration, which is an ill-posed problem. Regularization helps to avoid both physically implausible displacement fields and local minima during optimization. Tikhonov regularization (squared 2-norm) cannot correctly represent non-smooth displacement fields, which occur, for example, at sliding interfaces in the thorax and abdomen in image time-series acquired during respiration. In this paper, isotropic Total Variation (TV) regularization is used to enable accurate registration near such interfaces. We further develop TV regularization for parametric displacement fields and provide an efficient numerical solution scheme based on the Alternating Direction Method of Multipliers (ADMM). The proposed method was successfully applied to four clinical databases capturing breathing motion, including CT lung and MR liver images, and provided accurate registration results over the whole volume. A key strength of the proposed method is that it does not depend on organ masks, which many algorithms conventionally require to avoid errors at sliding interfaces. Furthermore, the method is robust to parameter selection, allowing the same parameters to be used for all tested databases. Its average target registration error (TRE) is 10% to 40% lower than that of other techniques in the literature, and it provides precise motion quantification and sliding detection with sub-pixel accuracy on the publicly available breathing motion databases (mean TREs of 0.95 mm on DIR 4D CT, 0.96 mm on DIR COPDgene, and 0.91 mm on POPI).
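As a hedged sketch of the formulation described above (the exact parametric displacement representation and data term are defined in the paper), a TV-regularized registration energy and its ADMM splitting typically take the following form, where $I_f$ and $I_m$ are the fixed and moving images, $\mathbf{u}$ the displacement field, and $\mathcal{D}$ a generic dissimilarity term:

\[
\min_{\mathbf{u}}\; \mathcal{D}\!\left(I_f,\, I_m \circ (\mathrm{id} + \mathbf{u})\right) + \lambda\, \mathrm{TV}(\mathbf{u}),
\qquad
\mathrm{TV}(\mathbf{u}) = \int_{\Omega} \Big(\textstyle\sum_{d} \lVert \nabla u_d(\mathbf{x}) \rVert_2^2\Big)^{1/2}\, \mathrm{d}\mathbf{x}.
\]

Introducing an auxiliary variable $\mathbf{z} = \nabla \mathbf{u}$ and a scaled dual variable $\mathbf{y}$, ADMM alternates

\[
\mathbf{u}^{k+1} = \arg\min_{\mathbf{u}}\; \mathcal{D}(\mathbf{u}) + \tfrac{\rho}{2}\lVert \nabla\mathbf{u} - \mathbf{z}^{k} + \mathbf{y}^{k}\rVert_2^2,
\qquad
\mathbf{z}^{k+1} = \arg\min_{\mathbf{z}}\; \lambda\lVert \mathbf{z}\rVert_{2,1} + \tfrac{\rho}{2}\lVert \nabla\mathbf{u}^{k+1} - \mathbf{z} + \mathbf{y}^{k}\rVert_2^2,
\qquad
\mathbf{y}^{k+1} = \mathbf{y}^{k} + \nabla\mathbf{u}^{k+1} - \mathbf{z}^{k+1},
\]

where the $\mathbf{z}$-update has a closed-form solution (vectorial soft-thresholding). This is a generic instance of TV-ADMM, not necessarily the paper's exact scheme.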
Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease, and automated tools can assist with parts of their otherwise manual assessment. This paper presents a cloud-based evaluation framework, together with results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. The algorithms are implemented in virtual machines in the cloud; participants have access only to the training data, and the benchmark administrators run the algorithms privately on an unseen common test set to compare their performance objectively. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores in the four available imaging modalities and for different subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results, and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and the Silver Corpus, generated by fusing the outputs of the participant algorithms on a larger set of non-manually-annotated medical images, are available to the research community.
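The abstract does not specify how the participant outputs are fused to build the Silver Corpus, so the following Python sketch uses simple per-voxel majority voting purely to illustrate the idea of label fusion across algorithms; the function name and voting rule are illustrative assumptions, not the VISCERAL fusion scheme.

```python
# Hedged sketch: fusing several algorithms' binary segmentations of the same
# volume into one "silver" mask via per-voxel majority voting. The actual
# Silver Corpus fusion method may differ; this is only an illustration.
import numpy as np

def majority_vote(masks: list) -> np.ndarray:
    """A voxel is foreground if more than half of the input masks mark it so.

    All masks are assumed to be binary arrays of identical shape.
    """
    stacked = np.stack([np.asarray(m, dtype=bool) for m in masks], axis=0)
    return stacked.sum(axis=0) > (len(masks) / 2.0)
```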
In this paper we present a method to jointly optimise the relevance and the diversity of the results in image retrieval. Without considering diversity, image retrieval systems often return a set of very similar results, so-called near-duplicates, which is rarely the desired behaviour. From the user's perspective, the ideal result set consists of documents that are not only relevant but also diverse. Most approaches to diversity in image or information retrieval follow a two-step procedure: first, a set of potentially relevant images is determined; second, these images are reranked so that the top positions are diverse. In contrast, our method addresses the problem directly and jointly optimises the diversity and the relevance of the images in the retrieval ranking, using techniques inspired by dynamic programming algorithms. We quantitatively evaluate our method on the ImageCLEF 2008 photo retrieval data and obtain results that outperform the state of the art. Additionally, we perform a qualitative evaluation on a new product search task and observe that the diversified results are more attractive to the average user.
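The paper's dynamic-programming-inspired joint optimisation is not reproduced here. As a hedged illustration of the relevance/diversity trade-off being optimised, the Python sketch below uses a simple greedy, maximal-marginal-relevance-style selection; the function name, the trade-off weight lam, and the score matrices are hypothetical inputs and this is not the authors' method.

```python
# Hedged sketch: greedy selection balancing relevance against redundancy,
# to make the relevance/diversity trade-off concrete. The paper instead
# optimises the ranking jointly with a dynamic-programming-inspired approach.
import numpy as np

def rerank(relevance: np.ndarray, similarity: np.ndarray,
           k: int, lam: float = 0.7) -> list:
    """Select k items maximising lam * relevance - (1 - lam) * redundancy.

    relevance:  shape (n,)   -- relevance score of each candidate image
    similarity: shape (n, n) -- pairwise similarity between candidate images
    """
    selected = []
    candidates = set(range(len(relevance)))
    while candidates and len(selected) < k:
        def score(i: int) -> float:
            # Redundancy: similarity to the most similar already-selected item.
            redundancy = max((similarity[i, j] for j in selected), default=0.0)
            return lam * relevance[i] - (1.0 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```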