It has been shown that employing multiple atlas images improves segmentation accuracy in atlas-based medical image segmentation. Each atlas image is registered to the target image independently, and the calculated transformation is applied to the segmentation of the atlas image to obtain a segmented version of the target image. This process yields several independent candidate segmentations, which must then be combined into a single final segmentation. Majority voting is the rule most commonly used to fuse the segmentations, but more sophisticated methods have also been proposed. In this paper, we show that the use of global weights to weight candidate segmentations has a major limitation. As a means to improve segmentation accuracy, we propose the generalized local weighted voting method, in which the fusion weights adapt voxel by voxel according to a local estimation of segmentation performance. Using digital phantoms and MR images of the human brain, we demonstrate that the performance of each combination technique depends on the gray-level contrast characteristics of the segmented region, and that no fusion method yields better results than the others for all regions. In particular, we show that local combination strategies outperform global methods in segmenting high-contrast structures, while global techniques are less sensitive to noise when contrast between neighboring structures is low. We conclude that, in order to achieve the highest overall segmentation accuracy, the best combination method for each particular structure must be selected.
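The two fusion rules contrasted in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the candidate segmentations are assumed to be binary arrays stacked along the first axis, and the per-voxel weights are assumed to come from some local estimate of atlas performance (e.g. local intensity similarity between each registered atlas and the target), which is not computed here.

```python
import numpy as np

def majority_voting(candidates):
    """Fuse N binary candidate segmentations, shape (N, ...), by per-voxel majority.

    A voxel is labeled foreground when more than half of the candidates agree.
    """
    votes = np.sum(candidates, axis=0)
    return (2 * votes > candidates.shape[0]).astype(np.uint8)

def local_weighted_voting(candidates, weights):
    """Fuse candidates using per-voxel weights (same shape as `candidates`).

    Each candidate's vote at a voxel is scaled by its local weight; the voxel is
    labeled foreground when the weighted vote exceeds half of the total weight.
    With uniform weights this reduces to plain majority voting.
    """
    weighted_votes = np.sum(weights * candidates, axis=0)
    return (weighted_votes > 0.5 * np.sum(weights, axis=0)).astype(np.uint8)
```

With globally constant weights the second function behaves like a global weighted vote; letting `weights` vary voxel by voxel is what distinguishes the local strategy described above.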
We present a combined report on the results of three editions of the Cell Tracking Challenge, an ongoing initiative aimed at promoting the development and objective evaluation of cell tracking algorithms. With twenty-one participating algorithms and a data repository consisting of thirteen datasets of various microscopy modalities, the challenge displays today’s state of the art in the field. We analyze the results using performance measures for segmentation and tracking that rank all participating methods. We also analyze the performance of all algorithms in terms of biological measures and their practical usability. Even though some methods score high in all technical aspects, not a single one obtains fully correct solutions. We show that methods that either take prior information into account using learning strategies or analyze cells in a global spatio-temporal video context perform better than other methods under the segmentation and tracking scenarios included in the challenge.
In this article, we present a novel method for the automatic 3D reconstruction of thick tissue blocks from 2D histological sections. The algorithm completes a high-content (multiscale, multifeature) imaging system for simultaneous morphological and molecular analysis of thick tissue samples. This computer-based system integrates image acquisition, annotation, registration, and three-dimensional reconstruction. We present an experimental validation of this tool using both synthetic and real data. In particular, we present the 3D reconstruction of an entire mouse mammary gland and demonstrate the integration of high-resolution molecular data.
Motivation: Automatic tracking of cells in multidimensional time-lapse fluorescence microscopy is an important task in many biomedical applications. A novel framework for objective evaluation of cell tracking algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2013 Cell Tracking Challenge. In this article, we present the logistics, datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark.

Results: The main contributions of the challenge include the creation of a comprehensive video dataset repository and the definition of objective measures for comparison and ranking of the algorithms. With this benchmark, six algorithms covering a variety of segmentation and tracking paradigms have been compared and ranked based on their performance on both synthetic and real datasets. Given the diversity of the datasets, we do not declare a single winner of the challenge. Instead, we present and discuss the results for each individual dataset separately.

Availability and implementation: The challenge Web site (http://www.codesolorzano.com/celltrackingchallenge) provides access to the training and competition datasets, along with the ground truth of the training videos. It also provides access to Windows and Linux executable files of the evaluation software and most of the algorithms that competed in the challenge.

Contact: codesolorzano@unav.es

Supplementary information: Supplementary data are available at Bioinformatics online.