In the effort to define genes and specific neuronal circuits that control behavior and plasticity, the capacity for high-precision automated analysis of behavior is essential. We report comprehensive computer vision software for the analysis of swimming locomotion in C. elegans, a simple animal model initially developed to facilitate the elaboration of genetic influences on behavior. The C. elegans swim test software CeleST tracks the swimming of multiple animals, measures 10 novel parameters of swim behavior that fully report dynamic changes in posture and speed, and generates data in several analysis formats, complete with statistics. Our measures of swim locomotion use a deformable model approach and a novel mathematical analysis of curvature maps, which enable even irregular patterns and dynamic changes to be scored without the need for thresholding or dropping outlier swimmers from the study. Operation of CeleST is largely automated and requires only minimal investigator intervention, such as the selection of videotaped swim trials and the choice of data output format. Data can be analyzed from the level of the single animal to populations of thousands. We document how CeleST reveals unexpected preferences for specific swim “gaits” in wild-type C. elegans, uncovers previously unknown mutant phenotypes, efficiently tracks changes in aging populations, and distinguishes “graceful” from poor aging. The sensitivity, dynamic range, and comprehensive nature of CeleST measures elevate swim locomotion analysis to a new level of ease, economy, and detail, enabling behavioral plasticity resulting from genetic, cellular, or experiential manipulation to be analyzed in ways not previously possible.
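To make the curvature-map idea concrete, here is a minimal sketch of computing signed curvature along tracked worm midlines over a video, the kind of body-position-by-time representation from which swim parameters can be derived. This is not CeleST's actual implementation; the data layout and function names are illustrative assumptions.

```python
# Minimal sketch (not CeleST's code): build a curvature map from tracked midlines.
# `midlines` is an assumed input of shape (frames, points, 2): (x, y) coordinates
# of the worm's midline at each video frame.
import numpy as np

def curvature_map(midlines):
    """Return an array (frames, points) of signed curvature values along the body."""
    maps = []
    for pts in midlines:
        dx, dy = np.gradient(pts[:, 0]), np.gradient(pts[:, 1])
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        denom = (dx**2 + dy**2) ** 1.5 + 1e-12      # avoid division by zero
        maps.append((dx * ddy - dy * ddx) / denom)  # signed curvature
    return np.array(maps)

# Toy example: a sinusoidal "worm" bending over 100 frames
t = np.linspace(0, 2 * np.pi, 100)
x = np.linspace(0, 1, 50)
midlines = np.stack(
    [np.stack([x, 0.1 * np.sin(4 * np.pi * x + p)], axis=1) for p in t]
)
print(curvature_map(midlines).shape)  # (100, 50): one curvature profile per frame
```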
Quantitative analysis of the swimming motions of C. elegans worms is of critical importance for many gene-related studies of aging. However, no automated methods are currently in use. We present a novel training-based method that automatically tracks and segments multiple swimming worms under challenging imaging conditions. The position of each worm is predicted by comparing its latest motion with a set of previous observations, and is then adjusted laterally and longitudinally to fit the image. After segmentation, a variety of measures can be used to assess the evolution of swimming patterns over time, allowing a quantitative comparison of worm populations over their lifetime. The complete software is being evaluated for mass processing in biology laboratories.
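As a rough illustration of the prediction step described above (matching a worm's latest motion against previously observed motion snippets to propose its next displacement, before any image-based refinement), here is a minimal nearest-neighbour sketch. It is not the paper's method; all array shapes and names are assumptions.

```python
# Minimal sketch (not the paper's algorithm): predict the next displacement of a
# worm by finding the closest previously observed motion snippet.
import numpy as np

def predict_displacement(recent_motion, motion_bank):
    """recent_motion: (k, 2) recent frame-to-frame displacements.
    motion_bank: list of (k+1, 2) previously observed snippets.
    Returns the final displacement of the best-matching snippet."""
    best, best_dist = None, np.inf
    for snippet in motion_bank:
        dist = np.linalg.norm(snippet[:-1] - recent_motion)
        if dist < best_dist:
            best, best_dist = snippet[-1], dist
    return best

# Toy usage with random motion snippets
bank = [np.random.randn(6, 2) for _ in range(20)]
recent = np.random.randn(5, 2)
print(predict_displacement(recent, bank))  # predicted (dx, dy) for the next frame
```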
This paper presents an algorithm that classifies pixels in uterine cervix images into two classes, normal and abnormal tissue, while simultaneously selecting relevant features using group sparsity. Because of the large variations in image appearance due to changes in illumination, specular reflections, and other visual noise, the two classes overlap strongly in feature space, whether the features are derived from color or from texture. Using more features makes the classes more separable and improves segmentation quality, but also increases complexity. However, the properties of these features have not been well investigated. In most cases, a group of features is selected prior to the segmentation process; features with minor contributions to the result are kept and add to the computational cost. We propose feature selection as a significant improvement to this problem: it provides a robust trade-off between segmentation quality and computational complexity. In this work we formulate cervigram segmentation as a feature-selection-based classification problem, and we introduce a regularization-based feature-selection algorithm that leverages both the sparsity and the clustering properties of the features used. We applied our method to automatically segment the biomarker AcetoWhite (AW) regions in a dataset of 200 images of the uterine cervix for which manual segmentations are available, and we compare the performance of several regularization-based feature-selection methods. The experimental results demonstrate that, on this dataset, our proposed group-sparsity-based method gives overall better results in terms of sensitivity, specificity, and sparsity.
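The group-sparsity idea can be sketched as follows: each pixel's features are organized into groups (for example, one group per color or texture descriptor), and a group-lasso penalty drives whole groups to zero so that entire descriptors drop out of the classifier. The snippet below is a generic proximal-gradient implementation of group-sparse logistic regression on toy data, not the authors' algorithm; all names and hyperparameters are illustrative.

```python
# Generic sketch of group-lasso (group-sparse) logistic regression via
# proximal gradient descent; not the paper's implementation.
import numpy as np

def group_soft_threshold(w, groups, t):
    """Shrink each feature group toward zero; groups with small norm vanish."""
    out = np.zeros_like(w)
    for g in groups:
        norm = np.linalg.norm(w[g])
        if norm > t:
            out[g] = (1 - t / norm) * w[g]
    return out

def group_lasso_logreg(X, y, groups, lam=0.1, lr=0.1, iters=300):
    """y in {0, 1}; returns a weight vector with group-level sparsity."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))              # predicted probabilities
        grad = X.T @ (p - y) / len(y)                 # logistic loss gradient
        w = group_soft_threshold(w - lr * grad, groups, lr * lam)
    return w

# Toy example: 3 feature groups of 4 dimensions each; only group 0 is informative.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
y = (X[:, :4].sum(axis=1) > 0).astype(float)
groups = [range(0, 4), range(4, 8), range(8, 12)]
w = group_lasso_logreg(X, y, groups, lam=0.1)
print(np.round(w, 2))  # the uninformative groups collapse toward zero
```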
In this paper we present two new segmentation methods that we developed for nuclei and chromosomal probes, the core objects of cytometry medical imaging. Our nucleus segmentation method is mathematically grounded in a novel parametric model of the image histogram, which simultaneously accounts for the background noise, the nuclear textures, and the nuclei's alterations of the background. We adapted an Expectation-Maximisation algorithm to fit this model to the histogram of each image and subregion in a coarse-to-fine approach. The probe segmentation uses a new dome-detection algorithm that is insensitive to background and foreground noise and detects probes of any intensity. We detail our two segmentation methods and our EM algorithm, and discuss the strengths of our techniques compared with state-of-the-art approaches. Both segmentation methods are unsupervised, automatic, and require neither training nor tuning; as a result, they are directly applicable to a wide range of medical images. We have used them as part of a large-scale project for the improvement of prenatal diagnosis of genetic diseases, and tested them on more than 2,100 images containing nearly 14,000 nuclei. We report 99.3% accuracy for each of our segmentation methods, with a robustness to different laboratory conditions not previously reported.
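As a stand-in for the histogram-model fit described above, the sketch below runs EM on a grey-level histogram using a simple two-Gaussian mixture; the paper's parametric model has dedicated background, texture, and alteration components, which are not reproduced here. All names and the toy data are assumptions.

```python
# Generic sketch: EM fit of a two-Gaussian mixture to a grey-level histogram
# (weighted by bin counts), illustrating the histogram-fitting idea only.
import numpy as np

def em_histogram(counts, iters=100):
    levels = np.arange(len(counts), dtype=float)
    w = counts / counts.sum()                         # histogram mass per grey level
    mu = np.array([levels[len(levels) // 4], levels[3 * len(levels) // 4]])
    sig = np.array([levels.std()] * 2)
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component at each grey level
        pdf = np.exp(-0.5 * ((levels[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
        r = pi * pdf
        r = r / (r.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: update parameters, weighting each level by its histogram mass
        wk = w[:, None] * r
        Nk = wk.sum(axis=0)
        mu = (wk * levels[:, None]).sum(axis=0) / Nk
        sig = np.sqrt((wk * (levels[:, None] - mu) ** 2).sum(axis=0) / Nk) + 1e-6
        pi = Nk
    return mu, sig, pi

# Toy histogram: dark background pixels plus a brighter object population
samples = np.concatenate([np.random.normal(60, 10, 5000), np.random.normal(170, 15, 2000)])
counts, _ = np.histogram(samples, bins=256, range=(0, 256))
print(np.round(em_histogram(counts)[0]))  # approximate component means
```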
We present a segmentation method for live-cell images based on graph cuts and learning methods. The images used here are particularly challenging because the grey-level distributions of cells and background overlap, differing only in their textures, and because cell borders are locally imprecise. We use the Pⁿ Potts model recently presented by Kohli et al. [9]: potential functions defined on higher-order cliques of pixels are added to the traditional Potts model, allowing us to account for local texture features while still finding the optimal solution efficiently. We use learning methods to define the potential functions of the Pⁿ Potts model. We present the model and the learning methods we used, and compare our segmentation results with similar work in cytometry. While our method performs comparably, it requires little manual tuning and is therefore straightforward to adapt to other images.
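For readers unfamiliar with graph-cut segmentation, the sketch below minimizes a plain pairwise Potts energy (unary grey-level terms plus a smoothness term) using the PyMaxflow package, assumed to be installed and imported as `maxflow`. The paper's learned higher-order Pⁿ clique potentials are not reproduced here; means, weights, and the toy image are illustrative assumptions.

```python
# Sketch: binary segmentation with a pairwise Potts energy solved by graph cuts
# (PyMaxflow). Not the paper's P^n model; unary terms are simple grey-level costs.
import numpy as np
import maxflow

def potts_segment(img, fg_mean, bg_mean, smoothness=2.0):
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(img.shape)
    g.add_grid_edges(nodes, smoothness)        # pairwise Potts smoothness term
    cost_fg = (img - fg_mean) ** 2             # unary cost of the "cell" label
    cost_bg = (img - bg_mean) ** 2             # unary cost of the "background" label
    g.add_grid_tedges(nodes, cost_fg, cost_bg)
    g.maxflow()
    # Boolean array: which side of the minimum cut each pixel falls on
    return g.get_grid_segments(nodes)

# Toy image: bright disk (cell) on a dark, noisy background
yy, xx = np.mgrid[:64, :64]
img = 0.2 + 0.6 * ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2) + 0.05 * np.random.randn(64, 64)
mask = potts_segment(img, fg_mean=0.8, bg_mean=0.2)
print(mask.shape, mask.dtype)
```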