Figure 1: Novel-view rendering of a soccer scene using only the two input cameras shown on the left and the right.
Abstract

We propose a fully automatic method for novel-viewpoint synthesis. Our method robustly handles multi-camera setups with wide baselines in uncontrolled environments. In a first step, robust and sparse point correspondences are found using an extension of the Daisy features [TLF10]. These correspondences, together with back-projection errors, drive a novel adaptive coarse-to-fine reconstruction method that approximates detailed geometry while avoiding an excessive triangle count. To render the scene from arbitrary viewpoints, we combine a view-dependent blending of color information with a view-dependent geometry morph. The view-dependent geometry compensates for misalignments caused by calibration errors. We demonstrate that our method works well under arbitrary lighting conditions with as few as two wide-baseline cameras. The footage, taken from real sports broadcast events, contains fine geometric structures, which our method renders convincingly from novel viewpoints despite the low image resolution.
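To give an intuition for view-dependent blending, the sketch below computes per-camera blending weights from the angular distance between the novel viewing direction and each input camera's viewing direction, so that the camera closest in angle dominates the blend. This is a minimal illustration under our own assumptions (inverse-angle weighting, normalized to sum to one); the paper's actual blending scheme may differ in its weighting function.

```python
import numpy as np

def blend_weights(view_dir, cam_dirs, eps=1e-6):
    """Angle-based blending weights (illustrative sketch, not the
    paper's exact scheme): cameras whose viewing direction is closer
    to the novel view receive larger weight."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    # Angle between the novel view and each camera direction.
    angles = np.array([
        np.arccos(np.clip(np.dot(view_dir, c / np.linalg.norm(c)), -1.0, 1.0))
        for c in cam_dirs
    ])
    w = 1.0 / (angles + eps)  # inverse-angle weighting
    return w / w.sum()        # normalize so weights sum to 1

# A novel view exactly between two symmetric wide-baseline cameras
# yields equal weights for both cameras.
w = blend_weights(np.array([0.0, 0.0, 1.0]),
                  [np.array([1.0, 0.0, 1.0]), np.array([-1.0, 0.0, 1.0])])
```

As the novel viewpoint moves toward one input camera, its weight smoothly approaches one, which avoids popping artifacts when interpolating between wide-baseline views.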