2009
DOI: 10.1007/s11263-009-0269-2
Camera Network Calibration and Synchronization from Silhouettes in Archived Video

Abstract: In this paper we present an automatic method for calibrating a network of cameras that works by analyzing only the motion of silhouettes in the multiple video streams. This is particularly useful for automatic reconstruction of a dynamic event using a camera network in a situation where pre-calibration of the cameras is impractical or even impossible. The key contribution of this work is a RANSAC-based algorithm that simultaneously computes the epipolar geometry and synchronization of a pair of cameras only fr…
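The RANSAC idea in the abstract — jointly hypothesizing a temporal offset and an epipolar geometry, then scoring by inlier count — can be sketched for the simpler case of point tracks. Note the paper itself works from silhouette frontier points, not point correspondences, so everything below (`sync_and_calibrate`, the single-point-per-frame track model, the integer frame-offset search) is a hypothetical illustration under simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def eight_point(x1, x2):
    """Fundamental matrix from >= 8 point pairs (normalized 8-point algorithm)."""
    def normalize(pts):
        mean = pts.mean(axis=0)
        scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - mean, axis=1))
        T = np.array([[scale, 0.0, -scale * mean[0]],
                      [0.0, scale, -scale * mean[1]],
                      [0.0, 0.0, 1.0]])
        ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
        return ph, T
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each row encodes the epipolar constraint p2^T F p1 = 0 (F flattened row-major).
    A = np.column_stack([p2[:, i] * p1[:, j] for i in range(3) for j in range(3)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt   # enforce rank 2
    return T2.T @ F @ T1                      # undo the normalization

def epipolar_error(F, x1, x2):
    """Distance (pixels) from each x2 to the epipolar line of its x1."""
    h1 = np.column_stack([x1, np.ones(len(x1))])
    h2 = np.column_stack([x2, np.ones(len(x2))])
    lines = h1 @ F.T                           # epipolar lines in image 2
    return np.abs(np.sum(h2 * lines, axis=1)) / np.linalg.norm(lines[:, :2], axis=1)

def sync_and_calibrate(track1, track2, offsets, n_iters=200, thresh=1.5, seed=0):
    """RANSAC over (frame offset, F): return the hypothesis with most inliers."""
    rng = np.random.default_rng(seed)
    best = (-1, None, None)                    # (inliers, offset, F)
    for dt in offsets:
        # Align the tracks under the hypothesized integer offset dt.
        if dt >= 0:
            x1, x2 = track1[dt:], track2
        else:
            x1, x2 = track1, track2[-dt:]
        n = min(len(x1), len(x2))
        x1, x2 = x1[:n], x2[:n]
        if n < 8:
            continue
        for _ in range(n_iters):
            idx = rng.choice(n, 8, replace=False)
            F = eight_point(x1[idx], x2[idx])
            inliers = int((epipolar_error(F, x1, x2) < thresh).sum())
            if inliers > best[0]:
                best = (inliers, dt, F)
    return best

# Demo on synthetic data: two cameras viewing one moving 3D point,
# with camera 2 starting 3 frames late.
rng = np.random.default_rng(1)
K = np.array([[500.0, 0, 320], [0, 500, 240], [0, 0, 1]])
c, s = np.cos(0.2), np.sin(0.2)
R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
t = np.array([1.0, 0.1, 0.0])
X = rng.uniform([-2, -2, 4], [2, 2, 8], size=(60, 3))  # 3D position per frame

def project(P, X):
    x = np.column_stack([X, np.ones(len(X))]) @ P.T
    return x[:, :2] / x[:, 2:]

P1 = K @ np.column_stack([np.eye(3), np.zeros(3)])
P2 = K @ np.column_stack([R, t])
true_offset = 3
track1 = project(P1, X) + rng.normal(0, 0.3, (60, 2))
track2 = project(P2, X[true_offset:]) + rng.normal(0, 0.3, (60 - true_offset, 2))

n_inliers, offset, F = sync_and_calibrate(track1, track2, range(-5, 6))
```

Only the correct offset yields geometrically consistent correspondences, so its inlier count dominates; this is the same joint-search principle the abstract describes, transplanted to point tracks.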

Cited by 49 publications (50 citation statements)
References 31 publications
“…They synchronize views by rank constraints on matrices that capture either the linear combination between points in two views or the brightness measurements of image patches. Sinha and Pollefeys [17] simultaneously calibrate and synchronize cameras in a network, but require the silhouette of a person instead of the trajectory alone.…”
Section: Prior Work
confidence: 99%
“…[20] and [14] assume that an estimate of the projective calibration is available (or can be computed from the static background), and utilise the epipolar constraint and the tri-focal transfer error, respectively, to estimate the synchronisation parameters. [25], on the other hand, can work with fully dynamic scenes, such as a blue screen scenario, and given an accurate foreground segmentation, computes a joint calibration and an index shift estimate.…”
Section: Introduction
confidence: 99%
“…[26], [20], [14] and [25] are unique in the sense that they discuss scenarios with more than 2 cameras. However, [25] and [14] assume a constant frame rate, and estimate only the temporal offset.…”
Section: Introduction
confidence: 99%