2007
DOI: 10.1111/j.1467-8659.2007.01080.x
Defocus Magnification

Figure 1 (caption): (a) input, (b) defocus map, (c) our result with magnified defocus. Our technique magnifies defocus given a single image. Our defocus map characterizes blurriness at edges. This enables shallow depth of field effects by magnifying existing defocus. The input photo was taken by a Canon PowerShot A80, a point-and-shoot camera with a sensor size of 7.18 × 5.32 mm, and a 7.8 mm lens at f/2.8.

Abstract: A blurry background due to shallow depth of field is often desired for photographs such as portraits, but, un…
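The caption above summarizes the idea: given a single image and a per-pixel defocus map, the existing blur is amplified while in-focus regions stay sharp. The Python sketch below illustrates only that magnification step under stated assumptions (a defocus map expressed as a per-pixel Gaussian sigma, a constant magnification gain, and the approximation that Gaussian blurs compose in quadrature); it is an illustration, not the paper's implementation.

# Minimal sketch of the "magnify existing defocus" step, assuming a per-pixel
# defocus map (estimated blur sigma in pixels) is already available. Extra
# Gaussian blur is added in proportion to the existing blur, so in-focus
# regions stay sharp. Not the paper's implementation.
import numpy as np
from scipy.ndimage import gaussian_filter

def magnify_defocus(image, defocus_map, gain=2.0, n_levels=8):
    """image: HxWx3 float array in [0, 1]; defocus_map: HxW blur-sigma estimate."""
    # Target blur after magnification; the extra blur to add is roughly
    # sqrt(target^2 - current^2) because Gaussian blurs compose in quadrature.
    target = gain * defocus_map
    extra = np.sqrt(np.maximum(target ** 2 - defocus_map ** 2, 0.0))

    # Precompute a small stack of uniformly blurred images and interpolate
    # between levels to approximate a spatially varying blur.
    sigmas = np.linspace(0.0, extra.max() + 1e-6, n_levels)
    stack = [image] + [
        np.dstack([gaussian_filter(image[..., c], s) for c in range(3)])
        for s in sigmas[1:]
    ]

    idx = np.clip(np.searchsorted(sigmas, extra) - 1, 0, n_levels - 2)
    lo, hi = sigmas[idx], sigmas[idx + 1]
    w = np.where(hi > lo, (extra - lo) / (hi - lo + 1e-12), 0.0)[..., None]

    out = np.zeros_like(image)
    for k in range(n_levels - 1):
        mask = (idx == k)[..., None]
        out += mask * ((1 - w) * stack[k] + w * stack[k + 1])
    return np.clip(out, 0.0, 1.0)

The blur-stack interpolation is only a convenience for the sketch; any spatially varying blur (e.g., a per-pixel disc kernel) could be substituted without changing the overall idea.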

Cited by 189 publications (153 citation statements)
References 29 publications
“…Mahmood and Choi [18] employ 3-D anisotropic diffusion to enhance the input images and, in turn, to obtain an accurate decision map. Staying with the idea of operating on decision maps, Bae et al. [21] apply bilateral filtering to this decision map in a related context. However, they do not consider the focus fusion task, but instead perform defocus magnification given a single image.…”
Section: Related Work (mentioning)
confidence: 99%
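The excerpt above mentions applying bilateral filtering to a decision/defocus map. A minimal sketch of that kind of edge-aware refinement is a joint (cross) bilateral filter, where the range weights come from the input photograph while the values being averaged come from the defocus map. The window radius and sigmas below are illustrative assumptions, not values from any of the cited papers.

# Minimal sketch: smooth a per-pixel defocus (or decision) map with a
# joint/cross-bilateral filter guided by the input image, so the refined map
# respects edges of the photograph. Parameters are illustrative.
import numpy as np

def joint_bilateral_filter(defocus_map, guide_gray, radius=5,
                           sigma_space=3.0, sigma_range=0.1):
    """defocus_map, guide_gray: HxW float arrays; guide intensities in [0, 1]."""
    H, W = defocus_map.shape
    pad = radius
    dm = np.pad(defocus_map, pad, mode='reflect')
    gd = np.pad(guide_gray, pad, mode='reflect')

    acc = np.zeros_like(defocus_map)
    weight = np.zeros_like(defocus_map)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # Spatial weight depends only on the offset.
            ws = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_space ** 2))
            # Range weight compares guide-image intensities, so averaging
            # does not cross strong image edges.
            shifted_g = gd[pad + dy:pad + dy + H, pad + dx:pad + dx + W]
            shifted_d = dm[pad + dy:pad + dy + H, pad + dx:pad + dx + W]
            wr = np.exp(-((shifted_g - guide_gray) ** 2) / (2.0 * sigma_range ** 2))
            acc += ws * wr * shifted_d
            weight += ws * wr
    return acc / np.maximum(weight, 1e-12)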
“…However, the accuracy of such methods is limited and camera calibration is necessary. References include works from Levin [32] and Bae and Durand [1].…”
Section: Single View Reconstruction (mentioning)
confidence: 99%
“…It has a scale parameter whose value can be estimated to describe the blurring scale at each location in the given images. Such descriptions of the blurring scales are required for many applications, e.g., image matting [11], moving object detection [5], image enhancement [3], and 3D-shape reconstruction from defocus [6], [10]. Many methods have therefore been proposed for estimating the blurring scale at each location in the given images.…”
Section: Introduction (mentioning)
confidence: 99%
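The excerpt above treats blur as a Gaussian kernel whose scale must be estimated at each location. One common way to sketch such an estimator is the re-blur/gradient-ratio idea: re-blur the image with a known Gaussian and invert the resulting change in edge gradient magnitude. The code below is a generic illustration of that idea, not the specific method of any paper cited here; sigma0 and the edge threshold are assumed values.

# Minimal sketch of per-pixel blur-scale estimation via re-blurring:
# compare gradient magnitudes before and after a known Gaussian re-blur
# and invert the step-edge model at strong edges. Illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_blur_scale(gray, sigma0=1.0, edge_thresh=0.01):
    """gray: HxW float image in [0, 1]. Returns sigma estimates, nonzero only at edges."""
    def grad_mag(img):
        gy, gx = np.gradient(img)
        return np.hypot(gx, gy)

    g1 = grad_mag(gray)                           # gradients of the input
    g2 = grad_mag(gaussian_filter(gray, sigma0))  # gradients after a known re-blur

    edges = g1 > edge_thresh                      # only trust reasonably strong edges
    ratio = np.where(edges, g1 / np.maximum(g2, 1e-12), 1.0)

    # For an ideal step edge blurred by sigma, re-blurring with sigma0 scales the
    # peak gradient by sigma / sqrt(sigma^2 + sigma0^2), so
    # ratio = sqrt(sigma^2 + sigma0^2) / sigma  =>  sigma = sigma0 / sqrt(ratio^2 - 1).
    sigma = np.where(edges & (ratio > 1.0),
                     sigma0 / np.sqrt(np.maximum(ratio ** 2 - 1.0, 1e-12)),
                     0.0)
    return sigma

Such edge-only estimates are sparse; in practice they are propagated to the rest of the image (for instance with an edge-aware filter like the one sketched earlier) before being used as a dense defocus map.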