Bioluminescence Tomography attempts to quantify three-dimensional luminophore distributions from surface measurements of the light distribution. The reconstruction problem is typically severely under-determined because of the number and location of measurements, but in certain cases the molecules or cells of interest form localised clusters, resulting in a distribution of luminophores that is spatially sparse. A Conjugate Gradient-based reconstruction algorithm using Compressive Sensing was designed to exploit this sparsity, using a multistage sparsity-reduction approach that removes the need to choose the sparsity weighting a priori. Numerical simulations were used to examine the effect of noise on reconstruction accuracy. Tomographic bioluminescence measurements of a Caliper XPM-2 Phantom Mouse were acquired, and reconstructions from simulated and experimental data show that Compressive Sensing-based reconstruction is superior to standard reconstruction techniques, particularly in the presence of noise.
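The abstract above does not give the algorithm itself; as a minimal illustrative sketch, sparsity-promoting reconstruction of this kind is often posed as an L1-regularised least-squares problem. The snippet below solves that surrogate problem with iterative soft-thresholding (ISTA), a simple stand-in for the paper's Conjugate Gradient scheme. The sensitivity matrix `A`, measurement vector `y`, and weighting `lam` are hypothetical placeholders, not quantities from the paper.

```python
import numpy as np

def sparse_reconstruct(A, y, lam, n_iter=500):
    """Sketch of sparsity-promoting reconstruction via ISTA.

    Minimises 0.5*||A x - y||^2 + lam*||x||_1, where A is a
    (hypothetical) sensitivity matrix mapping internal sources to
    surface measurements y, and lam is the sparsity weighting.
    """
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)          # gradient of the least-squares term
        z = x - grad / L                  # gradient step
        # soft-thresholding promotes a sparse solution
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

# Toy example: an under-determined system (30 measurements, 100 unknowns)
# with a 2-sparse "source" vector and additive noise.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[[10, 60]] = [1.0, -0.5]
y = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = sparse_reconstruct(A, y, lam=0.05)
```

In a multistage variant such as the one described above, the weighting `lam` would be varied across stages rather than fixed in advance.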
A multi-modal optical imaging system for quantitative 3D bioluminescence and functional diffuse imaging is presented, which has no moving parts and uses mirrors to provide multi-view tomographic data for image reconstruction. It is demonstrated that through the use of trans-illuminated spectral near-infrared measurements and spectrally constrained tomographic reconstruction, recovered concentrations of absorbing agents can be used as prior knowledge for bioluminescence imaging within the visible spectrum. Additionally, the first use of a recently developed multi-view optical surface capture technique is shown, and its application to model-based image reconstruction and free-space light modelling is demonstrated. The benefits of model-based tomographic image recovery as compared to 2D planar imaging are highlighted in a number of scenarios where the internal luminescence source is not visible or is confounding in 2D images. The results presented show that the luminescence tomographic imaging method produces 3D reconstructions of individual light sources within a mouse-sized solid phantom that are accurately localised to within 1.5 mm for a range of target locations and depths, indicating sensitivity and accurate imaging throughout the phantom volume. Additionally, the total reconstructed luminescence source intensity is consistent to within 15%, which is a dramatic improvement upon standard bioluminescence imaging. Finally, results from a heterogeneous phantom with an absorbing anomaly are presented, demonstrating the use and benefits of a multi-view, spectrally constrained coupled imaging system that provides accurate 3D luminescence images.
A novel method is presented for accurately reconstructing a spatially resolved map of diffuse light flux on a surface using images of the surface and a model of the imaging system. This is achieved by applying a model-based reconstruction algorithm with an existing forward model of light propagation through free space that accounts for the effects of perspective, focus, and imaging geometry. It is shown that flux can be mapped reliably and with high quantitative accuracy, with error below 3% at modest signal-to-noise ratios. Simulation shows that the method generalizes to the case in which mirrors are used in the system, so that multiple views can be combined in reconstruction. Validation experiments show that physical diffuse phantom surface fluxes can also be reconstructed accurately, with variability below 3% for a range of object positions, states of focus, and orientations. The method provides a new way of making quantitatively accurate non-contact measurements of the amount of light leaving a diffusive medium, such as a small animal containing fluorescent or bioluminescent markers, that is independent of the imaging system configuration and surface position.
In-depth scene descriptions and question-answering tasks have greatly increased the scope of today's definition of scene understanding. While such tasks are in principle open-ended, current formulations primarily focus on describing only the current state of the scenes under consideration. In contrast, in this paper we focus on future states of the scenes, conditioned on actions. We pose this as a question-answering task, where an answer has to be given about a future scene state, given observations of the current scene and a question that includes a hypothetical action. Our solution is a hybrid model that integrates a physics engine into a question-answering architecture in order to anticipate future scene states resulting from object-object interactions caused by an action. We demonstrate first results on this challenging new problem and compare against baselines, outperforming fully data-driven end-to-end learning approaches.