2001
DOI: 10.1111/1467-8659.00509

On-the-Fly Processing of Generalized Lumigraphs

Abstract: We introduce a flexible and powerful concept for reconstructing arbitrary views from multiple source images on the fly. Our approach is based on a Lumigraph structure with per-pixel depth values, and generalizes the classical two-plane parameterized light fields and Lumigraphs. With our technique, it is possible to render arbitrary views of time-varying, non-diffuse scenes at interactive frame rates, and it allows using any kind of sensor that yields images with dense depth information. We demonstrate the flex…
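To make the abstract's central idea concrete, below is a hedged sketch of depth-based view warping: given per-pixel depth, every source pixel can be reprojected into an arbitrary novel view. This is a generic forward warp under simplified pinhole-camera assumptions, not the paper's actual Lumigraph reconstruction pipeline; the function name and calibration inputs are illustrative.

```python
# Generic forward warp: reproject each source pixel (with known depth) into a
# target camera. Holes and occlusions are not handled; this only illustrates
# why per-pixel depth enables rendering views away from the source cameras.
import numpy as np

def warp_to_novel_view(color, depth, K_src, K_dst, R, t):
    """color: (h, w, 3) image, depth: (h, w) per-pixel depth.
    R, t map source-camera coordinates into target-camera coordinates."""
    h, w = depth.shape
    out = np.zeros_like(color)
    ys, xs = np.mgrid[0:h, 0:w]
    # Homogeneous pixel coordinates, 3 x N.
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    # Back-project to 3D points in the source camera frame.
    pts = (np.linalg.inv(K_src) @ pix) * depth.reshape(-1)
    # Transform into the target frame and project.
    proj = K_dst @ (R @ pts + t[:, None])
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    ok = (proj[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out[v[ok], u[ok]] = color.reshape(h * w, -1)[ok]
    return out
```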

Cited by 46 publications (31 citation statements). References 23 publications.
“…Since synthesizing a novel view requires only parts of segments in the input images, several systems [23], [26], [27] use the region of interest (ROI) approach to reduce the amount of processed data. We could, for example, partially decode the received JPEG images and only upload the decoded segments to the GPU memory by using the ROI approach.…”
Section: Discussion (mentioning)
confidence: 99%
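A minimal sketch of the ROI idea in the excerpt above: extract only the image region a novel view actually needs before handing pixels to the GPU. For simplicity this crops after a full decode; the systems cited would decode only the JPEG blocks covering the ROI. `load_roi` and `upload_to_gpu` are hypothetical names.

```python
# Sketch: load only the region of interest from a source camera's JPEG.
# A production system would decode just the JPEG blocks covering the ROI
# rather than cropping after a full decode.
from PIL import Image

def load_roi(jpeg_path, box):
    """Return the pixels inside `box` = (left, upper, right, lower)."""
    with Image.open(jpeg_path) as img:
        return img.crop(box).copy()

roi = load_roi("cam03.jpg", (128, 64, 384, 320))  # assumed file and ROI
# upload_to_gpu(roi)  # placeholder for a glTexSubImage2D-style partial upload
```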
“…However, because the cameras cannot be arranged densely enough in practice, we need to estimate the scene geometry (e.g., depth maps) for higher-quality rendering. Schirmacher et al. [26] used an array of six FireWire cameras and generated views at 1-2 fps with dense depth maps estimated from the stereo camera pairs, but the rendering quality was limited due to erroneous depth reconstruction. Zhang and Chen [27] presented a self-reconfigurable camera array using 48 network cameras, each of which can move sideways and pan using servo motors.…”
Section: Related Work (mentioning)
confidence: 99%
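A generic stand-in for the per-pair depth estimation the excerpt describes: block-matching stereo on a rectified image pair, using OpenCV. This is not the algorithm of [26]; the file names, focal length f, and baseline b are assumptions.

```python
# Estimate a dense disparity map from a rectified stereo pair, then convert
# it to metric depth via depth = f * b / disparity.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

f, b = 700.0, 0.1  # focal length (pixels) and baseline (metres), assumed
depth = (f * b) / np.maximum(disp, 0.1)  # clamp to avoid division by zero
```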
“…In the hardware-based approach, the proposed solutions use different types of special hardware devices that allow the acquisition of a uniform sampling of the view direction, like a computer-controlled gantry with planar camera motion [2], camera arrays [6], [7], [8], [9], or camera systems with additional devices (microlens arrays [10], [11], attenuating masks [12]).…”
Section: Related Work (mentioning)
confidence: 99%
“…To solve the overdetermined system in Equation 6 we use a Weighted Singular Value Decomposition (SVD) in order to take advantage of the quality information $q^{(j)}_{u,v}$ related to each sample. In this way, we compute a weighted least-squares solution of the system:…”
Section: Color Residual Fitting (mentioning)
confidence: 99%
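The weighted SVD solve in this excerpt is standard weighted least squares: scale each row of the overdetermined system by the square root of its quality weight, then invert via the SVD pseudo-inverse. A sketch follows; A, b, and q are placeholders, since Equation 6 itself is not reproduced in the excerpt.

```python
# Weighted least squares via SVD: minimizing sum_i q_i * (a_i . x - b_i)^2 is
# equivalent to solving the row-scaled system diag(sqrt(q)) A x = diag(sqrt(q)) b.
import numpy as np

def weighted_lstsq(A, b, q):
    sw = np.sqrt(q)                      # per-sample weights q^{(j)}_{u,v}
    Aw, bw = A * sw[:, None], b * sw     # row-scaled system
    U, s, Vt = np.linalg.svd(Aw, full_matrices=False)
    return Vt.T @ ((U.T @ bw) / s)       # SVD pseudo-inverse solution

A = np.random.rand(20, 4)   # toy overdetermined system (placeholder for Eq. 6)
b = np.random.rand(20)
q = np.random.rand(20)      # quality of each sample
x = weighted_lstsq(A, b, q)
```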
“…extended LFR with per-pixel depth information computed with a classic stereo algorithm [29]. Their approach allows real-time online view synthesis.…”
Section: Introduction (mentioning)
confidence: 99%