Abstract: Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method.
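The reconstruction described above is a non-negative linear inverse problem (the measured light field is modeled as a forward projection of the unknown volume), and such problems are commonly solved with multiplicative Richardson-Lucy-style updates. The sketch below is a minimal, illustrative version of that idea using a dense NumPy matrix H as a stand-in for the light field forward model; the matrix, sizes, and iteration count are assumptions for demonstration, not the paper's GPU implementation.

```python
import numpy as np

def richardson_lucy(H, g, n_iter=30, eps=1e-12):
    """Multiplicative (Richardson-Lucy-style) update for g ≈ H @ f with f >= 0.

    H : (n_pixels, n_voxels) forward model mapping a volume to a light field.
    g : (n_pixels,) measured light field (non-negative).
    Returns f : (n_voxels,) reconstructed volume estimate.
    """
    f = np.full(H.shape[1], g.mean() + eps)   # flat, non-negative initial volume
    norm = H.sum(axis=0) + eps                # column sums used for normalization
    for _ in range(n_iter):
        pred = H @ f + eps                    # simulate the light field from f
        ratio = g / pred                      # measured / predicted
        f *= (H.T @ ratio) / norm             # multiplicative update keeps f >= 0
    return f

# Tiny synthetic demo with an arbitrary random forward model (illustrative only):
rng = np.random.default_rng(0)
H = rng.random((64, 32))
f_true = rng.random(32)
g = H @ f_true
f_est = richardson_lucy(H, g, n_iter=200)
```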
Figure 1: The DeepView architecture. (a) The network takes a sparse set of input images shot from different viewpoints. (b, c) The scene is reconstructed using learned gradient descent, producing a multi-plane image (a series of fronto-parallel, RGBA textured planes). (d) The multi-plane image is suitable for real-time, high-quality rendering of novel viewpoints. The result above uses four input views in a 30 cm × 20 cm rectangular layout. The novel view was rendered with a virtual camera positioned at the centroid of the four input views. More results, including video and an interactive viewer, at: https://augmentedperception.github.io/deepview/

Abstract: We present a novel approach to view synthesis using multiplane images (MPIs). Building on recent advances in learned gradient descent, our algorithm generates an MPI from a set of sparse camera viewpoints. The resulting method incorporates occlusion reasoning, improving performance on challenging scene features such as object boundaries, lighting reflections, thin structures, and scenes with high depth complexity. We show that our method achieves high-quality, state-of-the-art results on two datasets: the Kalantari light field dataset, and a new camera array dataset, Spaces, which we make publicly available.
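For context on how an MPI is turned into a novel view: once the RGBA planes are warped into the target camera, the image is formed by compositing the planes back to front with the standard over operator. The sketch below shows only that compositing step (warping omitted); the array layout and shapes are assumptions for illustration, not the paper's rendering code.

```python
import numpy as np

def composite_mpi(rgba_planes):
    """Back-to-front 'over' compositing of a multi-plane image.

    rgba_planes : (D, H, W, 4) array ordered far-to-near; RGB not premultiplied,
                  alpha in [0, 1].
    Returns an (H, W, 3) rendered image.
    """
    D, H, W, _ = rgba_planes.shape
    out = np.zeros((H, W, 3))
    for d in range(D):                       # far plane first, near plane last
        rgb = rgba_planes[d, ..., :3]
        a = rgba_planes[d, ..., 3:4]
        out = rgb * a + out * (1.0 - a)      # standard over operator
    return out
```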
Prolonged behavioral challenges can cause animals to switch from active to passive coping strategies to manage effort expenditure during stress; such normally adaptive behavioral state transitions can become maladaptive in psychiatric disorders such as depression. The underlying neuronal dynamics and brainwide interactions important for passive coping have remained unclear. Here, we develop a paradigm to study these behavioral state transitions at cellular resolution across the entire vertebrate brain. Using brainwide imaging in zebrafish, we observed that the transition to passive coping is manifested by progressive activation of neurons in the ventral (lateral) habenula. Activation of these ventral-habenula neurons suppressed downstream neurons in the serotonergic raphe nucleus and caused behavioral passivity, whereas inhibition of these neurons prevented passivity. Data-driven recurrent neural network modeling pointed to altered intra-habenula interactions as a contributory mechanism. These results demonstrate ongoing encoding of experience features in the habenula, which guides recruitment of downstream networks and imposes a passive coping behavioral strategy.
The goal of understanding living nervous systems has driven interest in high-speed, large field-of-view volumetric imaging at cellular resolution. Light sheet microscopy approaches have emerged for cellular-resolution functional brain imaging in small organisms such as larval zebrafish, but remain fundamentally limited in speed. Here, we have developed SPED light sheet microscopy, which combines a large volumetric field of view via an extended depth of field with the optical sectioning of light sheet microscopy, thereby eliminating the need to physically scan detection objectives for volumetric imaging. SPED enables scanning of thousands of volumes per second, limited only by camera acquisition rate, by harnessing optical mechanisms that normally produce unwanted spherical aberrations. We demonstrate the capabilities of SPED microscopy by performing fast sub-cellular-resolution imaging of CLARITY mouse brains and cellular-resolution volumetric Ca²⁺ imaging of entire zebrafish nervous systems. Together, SPED light sheet methods enable high-speed cellular-resolution volumetric mapping of biological system structure and function.
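As a back-of-the-envelope illustration of the claim that volumetric rate is set by the camera (the numbers below are hypothetical, not taken from the paper): when the detection path needs no mechanical scanning, the volume rate is simply the camera frame rate divided by the number of planes acquired per volume.

```python
# Hypothetical numbers, for illustration only (not from the paper):
camera_fps = 100_000        # frames per second the camera can stream
planes_per_volume = 25      # axial planes acquired per volume

volumes_per_second = camera_fps / planes_per_volume
print(volumes_per_second)   # 4000.0 volumes per second
```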
Whole-brain recordings give us a global perspective of the brain in action. In this study, we describe a method using light field microscopy to record near-whole brain calcium and voltage activity at high speed in behaving adult flies. We first obtained global activity maps for various stimuli and behaviors. Notably, we found that brain activity increased on a global scale when the fly walked but not when it groomed. This global increase with walking was particularly strong in dopamine neurons. Second, we extracted maps of spatially distinct sources of activity as well as their time series using principal component analysis and independent component analysis. The characteristic shapes in the maps matched the anatomy of subneuropil regions and, in some cases, a specific neuron type. Brain structures that responded to light and odor were consistent with previous reports, confirming the new technique’s validity. We also observed previously uncharacterized behavior-related activity as well as patterns of spontaneous voltage activity.
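A minimal sketch of the kind of PCA/ICA source extraction mentioned above, applied to a time-by-pixels movie: PCA reduces dimensionality, then ICA unmixes the retained components into spatial maps and their time series. Function names, shapes, and pre-processing here are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def extract_sources(movie, n_components=20):
    """movie : (T, H, W) calcium or voltage movie.
    Returns (n_components, H, W) spatial maps and (T, n_components) time series."""
    T, H, W = movie.shape
    X = movie.reshape(T, H * W)
    X = X - X.mean(axis=0)                        # remove per-pixel baseline

    pca = PCA(n_components=n_components)          # reduce dimensionality / denoise
    scores = pca.fit_transform(X)                 # (T, n_components)

    ica = FastICA(n_components=n_components, random_state=0)
    time_series = ica.fit_transform(scores)       # (T, n_components) unmixed sources
    maps = ica.mixing_.T @ pca.components_        # (n_components, H*W) spatial maps
    return maps.reshape(n_components, H, W), time_series
```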