2009
DOI: 10.1117/12.805915

Extended depth-of-field using sharpness transport across color channels

Abstract: In this paper we present an approach to extend the depth of field (DoF) of miniature cell-phone cameras by jointly optimizing the optical system and post-capture digital processing. Our lens design deliberately increases longitudinal chromatic aberration so that, for a given object distance, at least one color plane of the RGB image contains in-focus scene information. Typically, red is made sharp for objects at infinity, green for intermediate distances, and blue for clos…
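The sharpness-transport idea in the abstract can be sketched in a few lines: split each channel into low- and high-frequency parts, then graft the detail of the sharpest channel onto the others. The sketch below is a minimal illustration under simplifying assumptions (a single, globally chosen sharp channel and Gaussian band-splitting); function and parameter names are not from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpness_transport(rgb, sigma=2.0):
    """Toy sharpness transport across colour channels.

    Each channel is split into a Gaussian low-pass base and a
    high-frequency residual; the residual of the (globally) sharpest
    channel is copied onto every channel's base. The real method works
    per region, not globally -- this is only an illustration.
    """
    # Low-pass each channel; high-pass = original - low-pass.
    low = np.stack([gaussian_filter(rgb[..., c], sigma) for c in range(3)],
                   axis=-1)
    high = rgb - low
    # Pick the channel with the largest high-frequency energy as "sharp".
    sharp = int(np.argmax((high ** 2).sum(axis=(0, 1))))
    # Transport the sharp channel's detail onto all low-pass bases.
    return np.clip(low + high[..., sharp:sharp + 1], 0.0, 1.0)
```

On a synthetic edge that is sharp in green but blurred in red and blue, the transported red channel recovers a steep edge profile from the green detail layer.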

Cited by 35 publications (19 citation statements)
References 16 publications
“…To account for its spatially varying nature, the estimation of defocus blur can generally be cast as blur-map estimation [1]-[4], in which the scale parameter of an a priori known defocus point-spread-function (PSF) model (disc, Gaussian) must be specified at each pixel. This has proven hard to solve accurately, and, to date, most successful solutions for out-of-focus restoration are techniques in which blur-kernel estimation is either simplified or circumvented through alterations in the optical design (coded aperture [5], chromatic aberrations [6]) or the availability of correctly focused images of the same scene [7]. This contrasts with camera-shake deblurring, for which a broad range of methods [8]-[16] work effectively from a single image recorded with a conventional camera.…”
Section: Introduction
confidence: 99%
“…including material or object recognition, color analysis and color constancy, biomedical imaging, remote sensing, and astronomy. Adopted strategies for capturing images across different spectra include introducing a filter array [114,116] or a prism splitter [117], applying mechanical or electronic control [22], and computational synchronization [36]. It is possible to capture dynamic scenes with high spectral resolution and high spatial resolution simultaneously, as demonstrated by Cao et al. [118], as shown in Figure 3.…”
Section: Wavelength Resolution
confidence: 99%
“…1) Depth can be recovered from defocus analysis, because the amount of defocus is closely tied to object distance. Typical approaches include introducing coded aperture patterns [46,143-145] or multiple apertures [114], or computing depth from image pairs captured with different aperture sizes [33,146-148]. Levin [149] compares the performance of different aperture codes for depth estimation and gives a mathematical analysis of the results using a geometrical-optics model.…”
Section: Extracting Depth or Shape
confidence: 99%
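The geometrical-optics link between blur and distance that these depth-from-defocus methods exploit is just the thin-lens blur-circle relation. A small sketch (the symbols d, s, f, A are generic, not from any cited paper): an object at distance d images at v_d = f·d/(d − f), the sensor sits at v_s = f·s/(s − f) for focus distance s, and the blur-circle diameter grows with |d − s|.

```python
def blur_diameter(d, s, f, A):
    """Thin-lens blur-circle diameter (geometrical-optics sketch).

    d: object distance, s: focus distance, f: focal length, A: aperture
    diameter, all in the same units (e.g. mm). The blur circle vanishes
    at d == s and grows as the object moves away from the focus plane,
    which is why defocus encodes depth.
    """
    v_s = f * s / (s - f)   # sensor position when focused at s
    v_d = f * d / (d - f)   # image position of an object at d
    return A * abs(v_s - v_d) / v_d
```

For example, with f = 50 mm, A = 25 mm, and focus at 2 m, an object at 1 m produces a larger blur circle than one at 1.5 m, and an in-focus object produces none.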
“…This application takes advantage of the longitudinal chromatic aberration from stock aspheric lenses. Various object distances can be resolved from induced axial colour with post-processing and sharpness metrics across colour channels [17]. Each colour corresponds to a particular object distance, and thus a depth map can be produced such that each colour is reconstructed based on the filtering parameters assigned to the respective colour.…”
Section: Introduction
confidence: 99%