2004 International Conference on Image Processing, 2004. ICIP '04.
DOI: 10.1109/icip.2004.1421746

Virtual view synthesis through linear processing without geometry

Cited by 3 publications (6 citation statements)
References 5 publications
“…The presented approach is much more efficient than the previous method, the multi-layered rendering and fusion method [4]. In this paper, we show that results similar to those of the multi-layered rendering and fusion approach can be obtained by directly filtering the captured images.…”
Section: Introduction (supporting)
confidence: 58%
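The quoted claim — that the multi-layered rendering-and-fusion result can be matched by filtering the captured images directly — rests on the synthesis being a linear operation on the inputs. A minimal sketch of that linearity, assuming a simple shift-and-sum model (the function name, weights, and shifts are illustrative assumptions, not the paper's actual filters):

```python
import numpy as np

def synthesize(images, shifts):
    """Virtual view as a linear combination of shifted input images.

    Each captured image is shifted (a convolution with a displaced
    delta kernel, implemented here with np.roll) and the results are
    averaged -- a purely linear, geometry-free operation. This is a
    hypothetical stand-in for the paper's filters.
    """
    out = np.zeros_like(images[0], dtype=np.float64)
    for img, (dy, dx) in zip(images, shifts):
        out += np.roll(img, shift=(dy, dx), axis=(0, 1))
    return out / len(images)

# Linearity check: filtering a weighted sum of inputs equals the
# weighted sum of filtered inputs.
rng = np.random.default_rng(0)
A = [rng.random((8, 8)) for _ in range(3)]
B = [rng.random((8, 8)) for _ in range(3)]
shifts = [(0, 0), (1, -1), (-2, 2)]
lhs = synthesize([2 * a + 3 * b for a, b in zip(A, B)], shifts)
rhs = 2 * synthesize(A, shifts) + 3 * synthesize(B, shifts)
assert np.allclose(lhs, rhs)
```

Because the whole pipeline is linear, any sequence of such rendering and fusion steps collapses into a single set of filters applied to the captured images, which is the equivalence the citing paper exploits.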
“…2 (b), the multi-layered rendering and fusing method [4] consists of two steps. In the first step, two virtual images, g 1 and g 2 , are generated at the center point of the camera array by using the conventional LFR method based on two depths, z 1 and z 2 , respectively.…”
Section: B. Image Formation Model in the Multi-Layered Rendering and Fusing Method (mentioning)
confidence: 99%
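The first step of the two-layer scheme quoted above can be sketched as follows. This is a minimal, hypothetical illustration, not the cited implementation: `render_at_depth`, its `focal_length` parameter, and the integer-pixel disparity model are assumptions; real light field rendering uses calibrated camera geometry and subpixel resampling.

```python
import numpy as np

def render_at_depth(images, cam_positions, depth, focal_length=1.0):
    """Plane-based light field rendering at one assumed depth (sketch).

    Each camera image is shifted by the disparity its viewpoint would
    induce for a scene plane at `depth`, then all shifted images are
    averaged. Scene points on the plane come out sharp; off-plane
    points are blurred.
    """
    out = np.zeros_like(images[0], dtype=np.float64)
    for img, (cx, cy) in zip(images, cam_positions):
        # Disparity (pixels) of the plane at `depth` for a camera
        # offset (cx, cy) from the array centre (pinhole assumption).
        dx = int(round(focal_length * cx / depth))
        dy = int(round(focal_length * cy / depth))
        out += np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
    return out / len(images)

def two_layer_render(images, cam_positions, z1, z2):
    """First step of the two-layer scheme: one rendering pass per depth."""
    g1 = render_at_depth(images, cam_positions, z1)
    g2 = render_at_depth(images, cam_positions, z2)
    return g1, g2
```

In the second step, as described in the citing paper, the two renderings g1 and g2 would be fused into a single virtual view; that fusion step is not sketched here.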
“…It is worth referring to the work of Kubota et al. [24], [25], [26], [27], where the authors use a set of linear filters for photometric adjustment of observations from a multi-focus system to obtain a fully focused, virtually rendered view. They also explain how visual special effects can be created based on depth-related defocus [26].…”
(mentioning)
confidence: 99%