Automated Visual Inspection and Machine Vision III 2019
DOI: 10.1117/12.2527599

A simulation framework for the design and evaluation of computational cameras

Abstract: In the emerging field of computational imaging, rapid prototyping of new camera concepts becomes increasingly difficult since the signal processing is intertwined with the physical design of a camera. As novel computational cameras capture information other than the traditional two-dimensional information, ground truth data, which can be used to thoroughly benchmark a new system design, is also hard to acquire. We propose to bridge this gap by using simulation. In this article, we present a raytracing framework…

Cited by 9 publications (6 citation statements)
References 19 publications

“…Then, a rectangular section I of the center MLI without border pixels, was used for the contrast calculation. With µ describing the average intensity value of the cutout I, the contrast c is calculated via

$$c = \frac{\sum_{x,y} I(x,y)\,\mathbb{1}_{I(x,y)>\mu}}{\sum_{x,y} \mathbb{1}_{I(x,y)>\mu}} \;-\; \frac{\sum_{x,y} I(x,y)\,\mathbb{1}_{I(x,y)\le\mu}}{\sum_{x,y} \mathbb{1}_{I(x,y)\le\mu}} \qquad (16)$$

whereby $\mathbb{1}$ is an indicator function, which is 1 if the condition is met, and 0 otherwise. In short, the first fraction describes the mean intensity of the bright values, i.e.…”
Section: Methods (mentioning)
confidence: 99%
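The quoted contrast measure of Eq. (16) reduces to the mean of the above-average pixels minus the mean of the remaining pixels of the cutout. A minimal NumPy sketch of this calculation follows; it is an illustration only, not code from the cited paper, and the function name micro_lens_contrast and the array-based interface are assumptions made here.

```python
import numpy as np

def micro_lens_contrast(cutout: np.ndarray) -> float:
    """Contrast of a micro-lens image cutout as in Eq. (16):
    mean intensity of the bright pixels minus mean intensity of the dark ones."""
    mu = cutout.mean()            # average intensity of the cutout I
    bright = cutout > mu          # indicator 1_{I(x,y) > mu}
    dark = ~bright                # indicator 1_{I(x,y) <= mu}
    if not bright.any():          # constant cutout: no bright pixels, zero contrast
        return 0.0
    return float(cutout[bright].mean() - cutout[dark].mean())
```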
“…The more advanced approaches in Liu et al [14] and Li et al [15] have been described theoretically, but not made publicly available. Only recently, with our previous work [8] as well as Nürnberg et al [16], simulations of plenoptic cameras without considerable simplifications of the lens geometry became openly available. In this work we will use the former for the evaluation due to its ease of use concomitant with the integration in Blender [17].…”
Section: Related Work (mentioning)
confidence: 99%
“…In particular, they neither account for natural, nor mechanical vignetting of the main lens and the MLs. Here, we use a self-developed ray tracer [17], [10], which we have extended by the following camera model to render a multitude of white images with precisely known ML centers.…”
Section: Camera Model and Reference Data (mentioning)
confidence: 99%
“…Whereas Blender provides high photorealism, to our knowledge there does not (yet) exist a multispectral extension of the used ray-tracing engine Cycles. For this reason, we use a recently published ray tracer [25] which is capable of directly rendering multispectral light fields and the depth ground truth.…”
Section: B. Random Scene Generation (mentioning)
confidence: 99%