Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing. This paper presents a systematic survey of the common processing steps and core decision rules in modern change detection algorithms, including significance and hypothesis testing, predictive models, the shading model, and background modeling. We also discuss important preprocessing methods, approaches to enforcing the consistency of the change mask, and principles for evaluating and comparing the performance of change detection algorithms. It is hoped that our classification of algorithms into a relatively small number of categories will provide useful guidance to the algorithm designer.
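The simplest decision rule surveyed above is a per-pixel significance test on the difference image: under the null hypothesis of no change, the difference of two co-registered images is zero-mean noise, and pixels whose difference is improbably large are flagged. The sketch below illustrates this idea only; the threshold form, the noise model, and the parameter names are illustrative assumptions, not the survey's specific algorithms.

```python
import numpy as np

def change_mask(img1, img2, noise_sigma, alpha=3.0):
    """Per-pixel change detection via a simple significance test.

    Under the no-change null hypothesis, img2 - img1 is zero-mean noise
    with standard deviation noise_sigma. Pixels whose absolute difference
    exceeds alpha * noise_sigma are flagged as changed.
    """
    diff = img2.astype(float) - img1.astype(float)
    return np.abs(diff) > alpha * noise_sigma
```

In practice such a raw mask is then post-processed to enforce spatial consistency, one of the topics the survey covers.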
Understanding cell lineage relationships is fundamental to understanding development, and can shed light on disease etiology and progression. We present a method for automated tracking of lineages of proliferative, migrating cells from a sequence of images. The method is applicable to image sequences gathered either in vitro or in vivo. Currently, generating lineage trees from progenitor cells over time is a tedious, manual process, which limits the number of cell measurements that can be practically analyzed. In contrast, the automated method is rapid and easily applied, and produces a wealth of measurements including the precise position, shape, cell-cell contacts, motility and ancestry of each cell in every frame, and accurate timings of critical events, e.g., mitosis and cell death. Furthermore, it automatically produces graphical output that is immediately accessible. Application to clonal development of mouse neural progenitor cells growing in cell culture reveals complex changes in cell cycle rates during neuron and glial production. The method enables a level of quantitative analysis of cell behavior over time that was previously infeasible.
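The core computational step in lineage construction is associating each cell in one frame with a cell in the next; unmatched cells in the new frame are candidates for mitosis or entry into the field of view. The following greedy nearest-centroid sketch is a deliberate simplification under assumed parameters (e.g., the `max_dist` gate), not the paper's tracking algorithm.

```python
import numpy as np

def associate_cells(prev_centroids, curr_centroids, max_dist=15.0):
    """Greedy frame-to-frame cell association by nearest centroid.

    Returns (prev_index, curr_index) pairs. Current-frame cells left
    unmatched are candidate daughter cells (mitosis) or new arrivals;
    previous-frame cells left unmatched are candidate deaths or exits.
    """
    pairs = []
    used = set()
    curr = np.asarray(curr_centroids, dtype=float)
    for i, p in enumerate(prev_centroids):
        d = np.linalg.norm(curr - np.asarray(p, dtype=float), axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist and j not in used:
            pairs.append((i, j))
            used.add(j)
    return pairs
```

Chaining these per-frame associations over the whole sequence, and branching at mitosis events, yields the lineage tree.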
Summary: Confocal microscopy is a three-dimensional (3D) imaging modality, but the specimen thickness that can be imaged is limited by depth-dependent signal attenuation. Both software and hardware methods have been used to correct the attenuation in reconstructed images, but previous methods do not increase the image signal-to-noise ratio (SNR) using conventional specimen preparation and imaging. We present a practical two-view method that increases the overall imaging depth, corrects signal attenuation and improves the SNR. This is achieved by a combination of slightly modified but conventional specimen preparation, image registration, montage synthesis and signal reconstruction methods. The specimen is mounted in a symmetrical manner between a pair of cover slips, rather than between a slide and a cover slip. It is imaged sequentially from both sides to generate two 3D image stacks from perspectives separated by approximately 180° with respect to the optical axis. An automated image registration algorithm performs a precise 3D alignment, and a model-based minimum mean-squared-error algorithm synthesizes a montage, combining the content of both 3D views. Experiments with images of individual neurones contrasted with a space-filling fluorescent dye in thick brain tissue slices produced precise 3D montages that are corrected for depth-dependent signal attenuation. The SNR of the reconstructed image is maximized by the method, and it is significantly higher than in the single views after applying our attenuation model. We also compare our method with simpler two-view reconstruction methods and quantify the SNR improvement. The reconstructed images are a more faithful qualitative visualization of the specimen's structure and are quantitatively more accurate, providing a more rigorous basis for automated image analysis.
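The idea of combining two opposing views can be sketched as follows: correct each registered stack for its own depth-dependent attenuation, then blend with weights that favor whichever view is shallower (and thus higher-SNR) at each depth. This is a minimal sketch assuming a simple exponential attenuation model and an illustrative coefficient `mu`; the paper's model-based reconstruction is more elaborate.

```python
import numpy as np

def two_view_montage(stack_a, stack_b_flipped, mu=0.05):
    """Combine two registered, opposing confocal stacks (z = axis 0).

    Assumes exponential attenuation I(z) = I0 * exp(-mu * z) along the
    optical axis; view B is already flipped into view A's frame. Each
    view is attenuation-corrected, then blended with weights proportional
    to the squared local signal gain (inverse-variance-style weighting),
    so the shallower view dominates at every depth.
    """
    z = np.arange(stack_a.shape[0], dtype=float)
    gain_a = np.exp(-mu * z)[:, None, None]        # view A attenuates with z
    gain_b = np.exp(-mu * z[::-1])[:, None, None]  # view B from the far side
    corrected_a = stack_a / gain_a
    corrected_b = stack_b_flipped / gain_b
    w_a, w_b = gain_a**2, gain_b**2
    return (w_a * corrected_a + w_b * corrected_b) / (w_a + w_b)
```

With ideal noise-free inputs the blend reproduces the unattenuated signal exactly; with noisy inputs the weighting is what raises the SNR above either single corrected view.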
Summary: This paper presents automated and accurate algorithms based on high-order transformation models for registering three-dimensional (3D) confocal images of dye-injected neurons. The algorithms improve upon prior methods in several ways, and meet the more stringent image registration needs of applications such as two-view attenuation correction recently developed by us. First, they achieve high accuracy (≈1.2 voxels, equivalent to 0.4 µm) by using landmarks, rather than intensity correlations, and by using a high-dimensional affine and quadratic transformation model that accounts for 3D translation, rotation, non-isotropic scaling, modest curvature of field, distortions and mechanical inconsistencies introduced by the imaging system. Second, they use a hierarchy of models and iterative algorithms to eliminate potential instabilities. Third, they incorporate robust statistical methods to achieve accurate registration in the face of inaccurate and missing landmarks. Fourth, they are fully automated, even estimating the initial registration from the extracted landmarks. Finally, they are computationally efficient, taking less than a minute on a 900-MHz Pentium III computer for registering two images roughly 70 MB in size. The registration errors represent a combination of modelling, estimation, discretization and neuron tracing errors. Accurate 3D montaging is described; the algorithms have broader applicability to images of vasculature and other structures with distinctive point, line and surface landmarks.
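The first, linear stage of such a hierarchy, fitting a 3D affine transform to matched landmark pairs by least squares, can be sketched as below. The quadratic terms, the iterative model hierarchy, and the robust re-weighting of the full method are omitted; this is only the affine core under the usual homogeneous-coordinate formulation.

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares 3D affine fit mapping landmarks src -> dst.

    src and dst are (N, 3) arrays of corresponding landmark coordinates,
    N >= 4 and non-coplanar. Returns the (4, 3) parameter matrix M such
    that [x, y, z, 1] @ M approximates the destination point.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine_3d(M, pts):
    """Apply a fitted affine transform to an (N, 3) array of points."""
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M
```

In the full algorithm this fit would be repeated with robust weights so that inaccurate or missing landmarks do not corrupt the estimate.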
Quantitative studies of dynamic behaviors of live neurons are currently limited by the slowness, subjectivity, and tedium of manual analysis of changes in time-lapse image sequences. Challenges to automation include the complexity of the changes of interest, the presence of obfuscating and uninteresting changes due to illumination variations and other imaging artifacts, and the sheer volume of recorded data. This paper describes a highly automated approach that not only detects the interesting changes selectively, but also generates quantitative analyses at multiple levels of detail. Detailed quantitative neuronal morphometry is generated for each frame. Frame-to-frame neuronal changes are measured and labeled as growth, shrinkage, merging or splitting, as would be done by a human expert. Finally, events unfolding over longer durations, such as apoptosis and axonal specification, are automatically inferred from the short-term changes. The proposed method is based on a Bayesian model selection criterion that leverages a set of short-term neurite change models and takes into account additional evidence provided by an illumination-insensitive change mask. An automated neuron tracing algorithm is used to identify the objects of interest in each frame. A novel curve distance measure and weighted bipartite graph matching are used to compare and associate neurites in successive frames. A separate set of multi-image change models drives the identification of longer-term events. The method achieved frame-to-frame change labeling accuracies ranging from 85–100% when tested on 8 representative recordings performed under varied imaging and culturing conditions, and successfully detected all higher-order events of interest. Two sequences were used for training the models and tuning their parameters; the learned parameter settings can be applied to hundreds of similar image sequences, provided imaging and culturing conditions are similar to the training set.
The proposed approach is a substantial innovation over manual annotation and change analysis, accomplishing in minutes what it would take an expert hours to complete.
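The neurite-association step described above amounts to minimum-cost bipartite matching on a pairwise distance matrix. The sketch below brute-forces the matching over permutations for clarity (production code would use the Hungarian algorithm, e.g. SciPy's `linear_sum_assignment`); the `max_cost` gate and the cost values are illustrative assumptions, and the paper's specific curve distance measure is not reproduced here.

```python
import itertools
import numpy as np

def match_neurites(cost, max_cost=10.0):
    """Minimum-cost bipartite matching of neurites between two frames.

    cost is an (n, m) matrix of pairwise curve distances, n <= m.
    Pairs whose cost exceeds max_cost are left unmatched, flagging
    candidate growth, shrinkage, merge, or split events.
    """
    cost = np.asarray(cost, dtype=float)
    n = cost.shape[0]
    best = min(itertools.permutations(range(cost.shape[1]), n),
               key=lambda p: sum(cost[i, p[i]] for i in range(n)))
    return [(i, j) for i, j in enumerate(best) if cost[i, j] <= max_cost]
```

Unmatched neurites on either side feed the short-term change models, which in turn provide the evidence for inferring longer-term events.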