2012
DOI: 10.1007/978-3-642-33718-5_36

Exposure Stacks of Live Scenes with Hand-Held Cameras

Abstract: Many computational photography applications require the user to take multiple pictures of the same scene with different camera settings. While this allows more information about the scene to be captured than is possible with a single image, the approach is limited by the requirement that the images be perfectly registered. In a typical scenario the camera is hand-held and is therefore prone to moving during the capture of an image burst, while the scene is likely to contain moving objects. Combining…

Cited by 56 publications (47 citation statements)
References 28 publications
“…As mentioned in Section 1, we are only aware of four methods that attempt to address the general case of camera motion and scene changes at the same time [13,30,11,24]. All the fused results were generated using the method by Mertens et al. [20], with the exception of Figure 6, which was tonemapped with the method by Mantiuk et al. [19] to allow for a fair comparison with the method by Sen et al. Figures 3 and 4 show results noticeably better than Zimmer et al. and Kang et al., respectively. When the reference image is reasonably well-exposed everywhere, our method produces results very similar to Hu et al. However, when part of the reference is saturated, as in Figure 5, Hu et al. discard valuable information from the shorter exposure (first row, middle image); our method, on the other hand, successfully captures all the available information in the synthesized latent image (second row, middle image).…”
Section: Results
confidence: 99%
“…To maximize the applicability of our algorithm, we do not want to limit its scope to RAW (linear) images. Image signal processors (ISPs) apply various non-linear transformations to the almost-linear pixel values; these transformations are usually much more sophisticated than a simple gamma compression, and they sometimes even depend on the image content [11,15], making them difficult or even impossible to invert. Hence, instead of linearizing the input images, we take inspiration from the energy definition by Darabi et al. [5], but we account for a generic intensity mapping function τ:…”
Section: Two-Picture Synthesis Algorithm
confidence: 99%
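The statement above is cut off at the colon, so the actual energy is not shown in the excerpt. Purely as an illustration of how a generic intensity mapping can enter a Darabi-style patch energy, assuming the latent image T is compared patch-wise against a source exposure S (the patch operators and the squared distance below are assumptions, not the cited paper's formulation), such an energy could look like

    E(T) = \sum_{p \in T} \min_{q \in S} \left\lVert P_T(p) - \tau\bigl(P_S(q)\bigr) \right\rVert^2,

where P_X(\cdot) extracts a small patch around a pixel and \tau maps the source's intensities into the exposure of the latent image before the patch comparison, so the input images never have to be linearized.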
“…There are also patch-based methods, in which patches containing moving objects are excluded [16,17]. To deal simultaneously with misalignment and moving objects, Zimmer et al. [18] proposed an optical flow-based energy minimization method, and Hu et al. [19] used nonrigid dense correspondence and a color transfer function for this task. Recently, low-rank matrix-based algorithms [20,21] have also been presented, based on the assumption that the irradiance maps are linearly related to the LDR exposures.…”
Section: Introduction
confidence: 99%
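The low-rank assumption in the last sentence of the statement above follows from the standard imaging model Z_i ≈ f(E · Δt_i): once the camera response f is inverted and the exposure time divided out, aligned frames of a static scene are all estimates of the same irradiance map, so stacking them as columns gives a matrix of rank close to one, with misalignment and moving objects appearing as deviations from that structure. A minimal NumPy sketch of this relation, using an assumed gamma-2.2 response, a toy static scene, and hypothetical helper names (not code from any of the cited papers):

```python
import numpy as np

# Sketch of the low-rank rationale (an assumption for illustration, not code
# from the cited papers): each LDR frame observes the same irradiance E through
# the response f and exposure time t_i, i.e. Z_i ≈ f(E * t_i). Inverting f and
# dividing by t_i makes aligned, static frames (nearly) identical columns of a
# matrix, so its rank is ≈ 1; motion and misalignment break this structure.

def radiance_matrix(ldr_stack, exposure_times, inv_response):
    """Stack per-frame irradiance estimates as columns of an (H*W) x N matrix."""
    cols = [inv_response(z).ravel() / t for z, t in zip(ldr_stack, exposure_times)]
    return np.stack(cols, axis=1)

# Toy static scene rendered with an assumed gamma-2.2 response.
rng = np.random.default_rng(0)
irradiance = rng.uniform(0.05, 1.0, size=(32, 32))
times = [1 / 60, 1 / 15, 1 / 4]
stack = [np.clip((irradiance * t) ** (1 / 2.2), 0.0, 1.0) for t in times]

A = radiance_matrix(stack, times, inv_response=lambda z: z ** 2.2)
print(np.linalg.matrix_rank(A, tol=1e-6))  # ≈ 1 for an aligned, static stack
```

When frames are misaligned or contain moving objects, the columns are no longer proportional and the effective rank grows; that deviation is exactly what the low-rank formulations mentioned above exploit to separate the static scene from ghosting artifacts.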