2009
DOI: 10.1111/j.1467-8659.2009.01416.x

Interactive Pixel‐Accurate Free Viewpoint Rendering from Images with Silhouette Aware Sampling

Abstract: We present an integrated, fully GPU-based processing pipeline to interactively render new views of arbitrary scenes from calibrated but otherwise unstructured input views. In a two-step procedure, our method first generates for each input view a dense proxy of the scene using a new multi-view stereo formulation. Each scene proxy consists of a structured cloud of feature-aware particles which automatically have their image-space footprints aligned to depth discontinuities of the scene geometry and hence effecti…
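
The abstract describes particles whose image-space footprints adapt to depth discontinuities. The sketch below is a minimal, hedged illustration of that idea only, not the paper's actual multi-view stereo formulation: it shrinks a per-pixel splat radius where the local depth gradient is large so that footprints do not straddle silhouettes. The function name, parameters, and gradient heuristic are assumptions made for this example.

```python
import numpy as np

def particle_footprints(depth, base_radius=2.0, edge_sensitivity=5.0):
    """Toy sketch (not the paper's method): one particle per pixel, with an
    image-space splat radius that shrinks near depth discontinuities."""
    # Cheap depth-gradient estimate via central differences.
    gy, gx = np.gradient(depth)
    grad_mag = np.sqrt(gx**2 + gy**2)

    # Large depth gradient (likely a silhouette) -> small footprint.
    radius = base_radius / (1.0 + edge_sensitivity * grad_mag)

    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Each particle: pixel position, depth, and its adapted footprint radius.
    return np.stack([xs.ravel(), ys.ravel(),
                     depth.ravel(), radius.ravel()], axis=1)

# Usage: a synthetic depth map with a sharp silhouette down the middle.
depth = np.ones((64, 64), dtype=np.float32)
depth[:, 32:] = 3.0
particles = particle_footprints(depth)
print(particles.shape)  # (4096, 4)
```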

Cited by 27 publications (12 citation statements) · References 33 publications

Citation statements (ordered by relevance):
“…Using a particle-based passive stereo approach, [Hornung and Kobbelt 2009] developed an interactive system for free viewpoint rendering of a static scene from a collection of pre-processed images. Kinect Fusion builds a voxel model of a static scene in realtime using a single Kinect depth sensor that is moved through space [Izadi et al 2011].…”
Section: Previous Work
confidence: 99%
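
The quoted statement contrasts the paper's image-based particle approach with KinectFusion's volumetric model built from a moving depth sensor. As a rough, hedged illustration of the volumetric side only (a generic toy version, not the actual KinectFusion system; the function name, argument layout, and truncation value are assumptions), the sketch below performs one truncated-signed-distance (TSDF) fusion step for a single depth frame.

```python
import numpy as np

def fuse_depth_into_tsdf(tsdf, weights, origin, voxel_size,
                         depth, K, cam_from_world, trunc=0.05):
    """Toy TSDF fusion step: project every voxel into the current depth frame
    and update its truncated signed distance by a running weighted average."""
    nx, ny, nz = tsdf.shape
    # World-space centres of all voxels in the grid.
    ii, jj, kk = np.mgrid[0:nx, 0:ny, 0:nz]
    pts_w = origin + (np.stack([ii, jj, kk], axis=-1) + 0.5) * voxel_size

    # Transform into camera coordinates and project with the pinhole model K.
    pts_c = pts_w @ cam_from_world[:3, :3].T + cam_from_world[:3, 3]
    z = pts_c[..., 2]
    u = np.round(K[0, 0] * pts_c[..., 0] / z + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts_c[..., 1] / z + K[1, 2]).astype(int)

    h, w = depth.shape
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.where(valid, depth[np.clip(v, 0, h - 1), np.clip(u, 0, w - 1)], 0.0)
    valid &= d > 0

    # Signed distance along the viewing ray, truncated to [-trunc, trunc];
    # only voxels in front of or just behind the observed surface are updated.
    sdf = np.clip(d - z, -trunc, trunc)
    update = valid & (d - z > -trunc)

    new_w = weights + update
    tsdf[update] = (tsdf[update] * weights[update] + sdf[update]) / new_w[update]
    weights[:] = new_w
    return tsdf, weights
```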
“…The point cloud will contain potentially noisy data, as well as duplicate points for regions that are visible in multiple images. The point cloud is filtered and smoothed in a manner similar to [27] to eliminate these issues. Finally, Poisson surface reconstruction is applied to the point cloud to form the geometry, G_α, for the morph stage.…”
Section: Shape and Texture Reconstruction
confidence: 99%
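
The quoted pipeline filters and smooths a merged multi-view point cloud, then applies Poisson surface reconstruction. The sketch below shows such a pipeline using Open3D as a stand-in (the cited work's exact filtering follows its reference [27], which is not reproduced here; the file names, voxel size, normal-estimation radius, and Poisson depth are placeholder assumptions).

```python
import open3d as o3d

# Hypothetical input: a merged multi-view point cloud with noise and duplicates.
pcd = o3d.io.read_point_cloud("merged_views.ply")

# Remove duplicate and noisy points; a voxel-grid merge plus statistical
# outlier removal stands in for the cited filtering/smoothing step.
pcd = pcd.voxel_down_sample(voxel_size=0.005)
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Poisson surface reconstruction needs consistently oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(15)

# Reconstruct a watertight mesh, analogous to the geometry G_α above.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
o3d.io.write_triangle_mesh("G_alpha.ply", mesh)
```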
“…One way of synthesizing textures in novel views or poses would be to use some form of projective texturing and blending, e.g., [Debevec et al 1996; Narayanan et al 1998; Buehler et al 2001; Carranza et al 2003; Cheung 2003; Hornung and Kobbelt 2009]. However, these approaches are known to produce texture ghosting when there are even the slightest of inaccuracies in scene geometry.…”
Section: Related Work
confidence: 99%
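
The statement refers to projective texturing with view-dependent blending and notes its sensitivity to geometric error. The sketch below is a generic, hedged illustration of that idea (in the spirit of unstructured lumigraph weighting, not any specific cited system): project a surface point into each calibrated input image, sample a colour, and blend with view-direction weights. The dictionary layout of `cameras`, the weight exponent, and nearest-neighbour sampling are assumptions; any error in the surface point shifts the projected pixel and produces exactly the ghosting described above.

```python
import numpy as np

def blend_projective_textures(point, novel_dir, cameras):
    """Toy projective texturing: `cameras` is a list of dicts with hypothetical
    keys 'K' (3x3 intrinsics), 'R', 't' (world-to-camera pose), 'image' (HxWx3).
    `novel_dir` is the unit direction from `point` toward the novel viewpoint."""
    colors, weights = [], []
    for cam in cameras:
        # Project the surface point into this input view.
        pc = cam['R'] @ point + cam['t']
        if pc[2] <= 0:
            continue  # behind the camera
        uv = cam['K'] @ (pc / pc[2])
        u, v = int(round(uv[0])), int(round(uv[1]))
        h, w, _ = cam['image'].shape
        if not (0 <= u < w and 0 <= v < h):
            continue
        # Favour input views whose direction agrees with the novel view.
        cam_center = -cam['R'].T @ cam['t']
        view_dir = cam_center - point
        view_dir /= np.linalg.norm(view_dir)
        weights.append(max(np.dot(view_dir, novel_dir), 0.0) ** 8)
        colors.append(cam['image'][v, u].astype(float))
    if not weights or sum(weights) == 0:
        return np.zeros(3)
    return np.average(colors, axis=0, weights=weights)
```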