2011
DOI: 10.1016/j.patrec.2010.02.011

Temporal synchronization of non-overlapping videos using known object motion

Abstract: This paper presents a robust technique for temporally aligning multiple video sequences that have no spatial overlap between their fields of view. It is assumed that (i) a moving target with known trajectory is viewed by all cameras at non-overlapping periods in time, (ii) the target trajectory is estimated with a limited error at a constant sampling rate, and (iii) the sequences are recorded by stationary cameras with constant frame rates and fixed intrinsic and extrinsic parameters. The propos…
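The abstract is truncated before the method itself is described, so the snippet below is only a minimal sketch of the stated setting, not the paper's algorithm: a stationary camera with a constant frame rate observes a target whose world trajectory is known at a constant sampling rate, and the camera's temporal offset is recovered by searching for the frame offset that best matches the projected trajectory. The names `estimate_time_offset`, `project`, and `fps_ratio` are hypothetical.

```python
import numpy as np

def estimate_time_offset(observed_uv, trajectory_xyz, project, fps_ratio=1.0,
                         max_offset=500):
    """Brute-force offset search (illustrative sketch, not the paper's method).
    observed_uv: (N, 2) detected image positions of the target in one camera;
    trajectory_xyz: (M, 3) known world trajectory at a constant sampling rate;
    project: the camera's fixed projection, world point -> (u, v);
    fps_ratio: trajectory samples per video frame (constant frame rate)."""
    projected = np.array([project(p) for p in trajectory_xyz])   # (M, 2)
    n = len(observed_uv)
    span = int(round((n - 1) * fps_ratio))        # trajectory samples spanned by the video
    last = min(max_offset, len(projected) - 1 - span)
    best_offset, best_err = None, np.inf
    for offset in range(last + 1):
        # camera frame i is assumed to correspond to trajectory sample
        # offset + round(i * fps_ratio)
        idx = offset + np.round(np.arange(n) * fps_ratio).astype(int)
        err = np.mean(np.linalg.norm(projected[idx] - observed_uv, axis=1))
        if err < best_err:
            best_offset, best_err = offset, err
    return best_offset, best_err
```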

Cited by 7 publications (5 citation statements). References: 34 publications.
“…Despite its higher precision, this latter method relies on image-based features extracted from the scene which, unlike ad-hoc devices, might not always be available and are subject to fluctuating levels of accuracy. A rather different approach is proposed by [6], where knowledge of the trajectory of an object spanning the view frustums of several cameras is exploited in order to synchronize them. A similar idea is also adopted in [5] where, in a complementary fashion, several mobile sensors are synchronized using simultaneous observations from the same camera.…”
Section: Related Work
confidence: 99%
“…Direct alignment uses all pixels in a video frame for synchronization and is suitable for videos containing changes in light; e.g., videos of fireworks. Feature-based alignment depends on features, such as points on moving objects or object trajectories, as the basis for the synchronization algorithm [17][18][19]. Wedge et al. [17] presented a coarse-to-fine approach to synchronize two video sequences recorded at the same frame rate by stationary cameras with fixed internal parameters.…”
Section: Video Sequence Synchronization
confidence: 99%
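The quoted passage names a coarse-to-fine search over frame offsets for two sequences at the same frame rate. Wedge et al.'s exact procedure is not reproduced here; the sketch below only illustrates that generic strategy, assuming `traj_a` and `traj_b` are NumPy arrays of tracked positions already expressed in a common, comparable coordinate frame.

```python
import numpy as np

def coarse_to_fine_offset(traj_a, traj_b, coarse_step=10, window=10):
    """Generic coarse-to-fine offset search (illustrative only, not Wedge et
    al.'s algorithm): scan candidate frame offsets at a coarse step, then
    refine around the best coarse candidate at single-frame resolution."""
    def cost(offset):
        n = min(len(traj_a), len(traj_b) - offset)
        if n <= 0:
            return np.inf
        return np.mean(np.linalg.norm(traj_a[:n] - traj_b[offset:offset + n],
                                      axis=1))

    max_off = len(traj_b) - 1
    coarse = min(range(0, max_off + 1, coarse_step), key=cost)   # coarse pass
    lo, hi = max(0, coarse - window), min(max_off, coarse + window)
    return min(range(lo, hi + 1), key=cost)                      # fine pass
```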
“…We address this problem by exploiting a novel projective-invariant descriptor based on the cross ratio to obtain the matched trajectory points between the two input videos. Numerous video synchronization methods have been presented in previous work; they are mainly classified into two categories: intensity-based ones [9][10][11][12][13] and feature-based ones [14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29]. The intensity-based methods usually rely on colors, intensities, or intensity gradients to achieve the temporal synchronization of overlapping videos.…”
Section: Introduction
confidence: 99%
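The descriptor in the quoted work is not reproduced here; the snippet below only shows the underlying quantity, the cross ratio of four collinear points, which is preserved under projective transformations and therefore lets trajectory points be compared across views without knowing the inter-view mapping. For example, cross_ratio((0, 0), (1, 0), (3, 0), (4, 0)) equals 1.125, and the images of those four points under any homography give the same value up to numerical error.

```python
import numpy as np

def cross_ratio(p1, p2, p3, p4):
    """Cross ratio (AC * BD) / (AD * BC) of four collinear 2D points, using
    Euclidean distances along their common line. Its invariance under
    projective transformations is what makes it usable as a matching cue."""
    d = lambda a, b: float(np.linalg.norm(np.asarray(b, float) - np.asarray(a, float)))
    return (d(p1, p3) * d(p2, p4)) / (d(p1, p4) * d(p2, p3))
```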
“…Among the feature-based video synchronization methods, the trajectory-based ones form one of the most popular categories [19][20][21][22][23][24][25][26][27][28][29]. These methods generally use epipolar geometry or homography information among different viewpoints to recover the matched trajectory points or time pairs between (or among) the input videos [21,22,26,28].…”
Section: Introduction
confidence: 99%
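As a rough illustration of the strategy this quote describes, and not of any specific cited method, the sketch below scores candidate time pairs between two trajectories with the epipolar constraint x_b^T F x_a ≈ 0, assuming the fundamental matrix F between the two fixed cameras is already known. The function name, the raw algebraic residual, and the threshold value are illustrative assumptions.

```python
import numpy as np

def epipolar_time_pairs(traj_a, traj_b, F, thresh=1.0):
    """Score every candidate time pair (i, j) between trajectory traj_a (N, 2)
    and traj_b (M, 2) by the algebraic epipolar residual |x_b^T F x_a|, and
    keep the best-matching frame j of video B for each frame i of video A.
    Illustrative sketch only; F is assumed known for the two fixed cameras."""
    ha = np.hstack([traj_a, np.ones((len(traj_a), 1))])   # homogeneous points, video A
    hb = np.hstack([traj_b, np.ones((len(traj_b), 1))])   # homogeneous points, video B
    residuals = np.abs(ha @ F.T @ hb.T)                   # (N, M), entry (i, j) = |x_b_j^T F x_a_i|
    best_j = residuals.argmin(axis=1)
    return [(i, int(j)) for i, j in enumerate(best_j)
            if residuals[i, j] < thresh]
```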