Proceedings of the British Machine Vision Conference 2009
DOI: 10.5244/c.23.122
Multi-view Synchronization of Human Actions and Dynamic Scenes

Cited by 22 publications
(18 citation statements)
References 15 publications
“…The results shown in Table 1 [caption: (C = Cricket, B = Baseball, F = Football) Synchronization Errors (in frames) using our proposed approach (WT), our proposed approach excluding the random sampling around the edge map (WT - EMS), the TE approach of [18], and the Self-similarity Matrices (SSM) approach of [8]]. Our method performs significantly better for these classes of sports videos.…”
Section: Form Comparison Matrix
confidence: 97%
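The synchronization error reported in the table above is measured in frames. As a minimal sketch of how such a metric is computed (the per-clip offsets below are invented for illustration and are not from the paper), it is simply the mean absolute deviation between estimated and ground-truth frame offsets:

```python
# Hypothetical ground-truth and estimated frame offsets for four video pairs
# (values invented for illustration; not from the paper).
gt_offsets = [3, -2, 0, 7]
est_offsets = [4, -2, 1, 7]

# Synchronization error in frames: mean absolute deviation from ground truth.
errors = [abs(e - g) for g, e in zip(gt_offsets, est_offsets)]
mean_error = sum(errors) / len(errors)
print(mean_error)  # 0.5
```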
“…However, none of them is automatic and most of them do not consider multiple views under occlusions. The self-similarity descriptor (SSM) of [11] presents an unsupervised solution to view-invariant activity recognition and has been used for automatic synchronization of videos [8]. However, the results are promising only for videos with very cleanly extracted activities and in the absence of occlusions.…”
Section: View-invariant Activity Recognition - Problems and Related Work
confidence: 99%
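The self-similarity idea referenced above compares a video's per-frame descriptors pairwise; the resulting distance matrix has a pattern that is approximately stable across camera viewpoints, which is what makes it useful for view-invariant recognition and synchronization. A minimal sketch, assuming plain Euclidean distances over synthetic 2-D features (not the actual descriptor of [11]):

```python
import numpy as np

def self_similarity_matrix(features):
    """Build a temporal self-similarity matrix (SSM) from per-frame features.

    features: (T, D) array with one descriptor per frame. Returns a (T, T)
    matrix of pairwise Euclidean distances: symmetric, zero on the diagonal.
    """
    diff = features[:, None, :] - features[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# Toy example: a periodic 2-D "motion" trace yields a periodic SSM pattern.
t = np.linspace(0, 4 * np.pi, 64)
feats = np.stack([np.sin(t), np.cos(t)], axis=1)
ssm = self_similarity_matrix(feats)
```

Because only relative distances between frames enter the matrix, two cameras observing the same activity from different angles produce similar SSM patterns, even when the raw features differ.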
“…The relative frame lag is measured either by comparing recorded motion present in the two video streams [41,42], or by actively inserting artificial optical cues into the field of view of the cameras [43]. This avoids the need for explicit synchronisation and is touted as a means of reducing costs, but in a number of scenarios the additional post-processing step increases the net complexity and cost.…”
Section: Overview Of Synchronisation Techniques
confidence: 99%
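Measuring relative frame lag by comparing recorded motion, as in [41,42], can be sketched by cross-correlating per-frame motion-energy signals extracted from the two streams. A minimal sketch, assuming such 1-D signals are already available (the signals below are synthetic, and the exact motion cue is an assumption):

```python
import numpy as np

def estimate_frame_lag(sig_a, sig_b, max_lag):
    """Estimate the lag L (in frames) such that sig_b[t] ~ sig_a[t - L],
    by maximising normalised cross-correlation over lags in [-max_lag, max_lag].
    """
    # Normalise both motion-energy signals to zero mean, unit variance.
    a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-9)
    b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-9)
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = b[lag:], a[:len(a) - lag]   # compare b[t+lag] with a[t]
        else:
            x, y = b[:len(b) + lag], a[-lag:]
        n = min(len(x), len(y))
        score = float(np.dot(x[:n], y[:n])) / n
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Toy check: delay a noisy periodic signal by 5 frames and recover the lag.
rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, 20, 200)) + 0.05 * rng.standard_normal(200)
lag = estimate_frame_lag(base, np.roll(base, 5), max_lag=10)
print(lag)  # 5
```

This is the "additional post-processing step" the snippet refers to: the cameras run unsynchronised, and alignment is recovered offline from the footage itself.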