Modern imaging optics are highly complex systems consisting of up to two dozen individual optical elements. This complexity is required in order to compensate for the geometric and chromatic aberrations of a single lens, including geometric distortion, field curvature, wavelength-dependent blur, and color fringing. In this article, we propose a set of computational photography techniques that remove these artifacts, and thus allow for post-capture correction of images captured through uncompensated, simple optics that are lighter and significantly less expensive. Specifically, we estimate per-channel, spatially varying point spread functions, and perform non-blind deconvolution with a novel cross-channel term that is specifically designed to eliminate color fringing.
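The cross-channel idea can be illustrated with a small sketch. The penalty below (function name and exact formulation are illustrative assumptions, not the paper's term) encourages the gradients of a blurred color channel to agree, up to a local scale, with those of a sharper reference channel, which suppresses color fringing at edges:

```python
import numpy as np

def cross_channel_penalty(channel, reference):
    """Toy cross-channel term: penalize mismatch between the gradients of one
    color channel and a sharper reference channel (e.g. green). Illustrative
    sketch only, not the paper's exact regularizer."""
    gx_c, gy_c = np.gradient(channel)
    gx_r, gy_r = np.gradient(reference)
    # Encourage  grad(c) * r  ~  grad(r) * c, i.e. hue stays locally constant,
    # so the penalty is zero whenever one channel is a scalar multiple of the other.
    rx = gx_c * reference - gx_r * channel
    ry = gy_c * reference - gy_r * channel
    return float(np.sum(rx**2 + ry**2))
```

Note that the term is invariant to global channel scaling: a channel that is exactly twice the reference incurs no penalty, while genuinely mismatched edge structure does.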
Without specialized sensor technology or custom, multichip cameras, high dynamic range imaging typically involves time-sequential capture of multiple photographs. The obvious downside to this approach is that it cannot easily be applied to images with moving objects, especially if the motions are complex. In this paper, we take a novel view of HDR capture, which is based on a computational photography approach. We propose to first optically encode both the low dynamic range portion of the scene and highlight information into a low dynamic range image that can be captured with a conventional image sensor. This step is achieved using a cross-screen, or star, filter. Second, we decode, in software, both the low dynamic range image and the highlight information. Lastly, these two portions can be combined to form an image with a dynamic range higher than that of the regular sensor.
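The final combination step can be sketched as follows. This is a hypothetical, minimal version that assumes the LDR image and the highlight values have already been decoded; the glare-based decoding of the star-filter streaks, which is the core of the method, is omitted here:

```python
import numpy as np

def combine_ldr_and_highlights(ldr, highlights, sat_level=1.0):
    """Toy final step: merge the decoded LDR image with recovered highlight
    values at saturated pixels. Hypothetical sketch; the star-filter glare
    decoding itself is not shown."""
    out = ldr.astype(np.float64).copy()
    saturated = ldr >= sat_level          # pixels clipped by the sensor
    out[saturated] = highlights[saturated]  # replace with decoded highlights
    return out
```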
Background: Neurodegenerative diseases (NDs) are characterized by the progressive loss of neurons in the human brain. Although the majority of NDs are sporadic, evidence is accumulating that they have a strong genetic component. Therefore, significant efforts have been made in recent years to identify not only disease-causing genes but also genes that modify the severity of NDs, so-called genetic modifiers. To date, there exists no compendium that lists and cross-links genetic modifiers of different NDs. Description: To address this need, we present NeuroGeM, the first comprehensive knowledgebase providing integrated information on genetic modifiers of nine different NDs in the model organisms D. melanogaster, C. elegans, and S. cerevisiae. NeuroGeM cross-links curated genetic modifier information from the different NDs and provides details on the experimental conditions used for modifier identification, functional annotations, links to homologous proteins, and color-coded protein-protein interaction networks to visualize modifier interactions. We demonstrate how this database can be used to generate new understanding through meta-analysis. For instance, we reveal that the Drosophila genes DnaJ-1, thread, Atx2, and mub are generic modifiers that affect multiple if not all NDs. Conclusion: As the first compendium of genetic modifiers, NeuroGeM will assist experimental and computational scientists in their search for the pathophysiological mechanisms underlying NDs. NeuroGeM is available at http://chibi.ubc.ca/neurogem.
We present a novel stochastic framework for non-blind deconvolution based on point samples obtained from random walks. Unlike previous methods that must be tailored to specific regularization strategies, the new Stochastic Deconvolution method allows arbitrary priors, including nonconvex and data-dependent regularizers, to be introduced and tested with little effort. Stochastic Deconvolution is straightforward to implement, produces state-of-the-art results, and directly leads to a natural boundary condition for image boundaries and saturated pixels.
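The sampling idea can be sketched in one dimension. The toy solver below (an illustrative assumption, not the paper's full implementation) mutates a randomly chosen pixel by a small step and keeps the change only if it lowers an energy combining a data-fit term with a total-variation prior; swapping in a different prior only requires editing the energy function, which is the flexibility the abstract describes:

```python
import numpy as np

def energy(x, b, kernel, lam):
    """Deconvolution energy: data-fit term plus total-variation prior."""
    data = np.convolve(x, kernel, mode='same') - b
    return np.sum(data**2) + lam * np.sum(np.abs(np.diff(x)))

def stochastic_deconv(b, kernel, n_samples=5000, delta=0.05, lam=0.01, seed=0):
    """Toy 1-D stochastic deconvolution: repeatedly mutate a random pixel by
    +/- delta and accept the mutation only if the energy decreases.
    Illustrative sketch of the sampling idea, not the paper's algorithm."""
    rng = np.random.RandomState(seed)
    x = b.astype(np.float64).copy()  # start from the blurred observation
    for _ in range(n_samples):
        i = rng.randint(len(x))
        step = delta if rng.rand() < 0.5 else -delta
        old = x[i]
        e0 = energy(x, b, kernel, lam)
        x[i] = old + step
        if energy(x, b, kernel, lam) >= e0:
            x[i] = old  # reject: mutation did not lower the energy
    return x
```

Because samples only ever lower the energy, the method needs no gradient of the prior, which is why nonconvex or data-dependent regularizers drop in easily.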
Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces cumulative error, as each step in the pipeline considers only the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model, enforces natural-image priors, and jointly accounts for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible, and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques. Finally, we show that our approach handles high-resolution images very efficiently, making even mobile implementations feasible.
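The joint-reconstruction idea can be sketched with a 1-D toy model. The code below (function names, forward model, and solver are illustrative assumptions, not the paper's method) composes blur and mosaicking into a single forward operator and inverts both at once by gradient descent, rather than deblurring and demosaicking in separate cascaded steps:

```python
import numpy as np

def forward_model(x, kernel, mosaic_mask):
    """Toy camera model: blur the latent signal, then sample it through a
    Bayer-like mosaic mask. Hypothetical sketch of the kind of image-formation
    operator an end-to-end pipeline inverts."""
    blurred = np.convolve(x, kernel, mode='same')
    return blurred * mosaic_mask

def joint_reconstruct(y, kernel, mosaic_mask, lam=0.05, n_iter=200, lr=0.2):
    """Recover x by gradient descent on ||A(x) - y||^2 + (lam/2)*||x||^2,
    undoing blur and mosaicking jointly instead of sequentially."""
    x = y.astype(np.float64).copy()
    for _ in range(n_iter):
        r = forward_model(x, kernel, mosaic_mask) - y
        # adjoint of A: re-mask, then correlate with the flipped kernel
        grad = np.convolve(r * mosaic_mask, kernel[::-1], mode='same') + lam * x
        x = x - lr * grad
    return x
```

A sequential pipeline would commit to a demosaicked image before deblurring; the joint objective avoids that intermediate commitment, which is the cumulative-error problem the abstract identifies.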