2021
DOI: 10.48550/arxiv.2104.09333
Preprint
Camera Calibration and Player Localization in SoccerNet-v2 and Investigation of their Representations for Action Spotting

Abstract: Soccer broadcast video understanding has been drawing a lot of attention in recent years among data scientists and industrial companies. This is mainly due to the lucrative potential unlocked by effective deep learning techniques developed in the field of computer vision. In this work, we focus on the topic of camera calibration and on its current limitations for the scientific community. More precisely, we tackle the absence of a large-scale calibration dataset and of a public calibration network trained on …

Cited by 2 publications (2 citation statements)
References 32 publications
“…However, the dynamic human bodies occupy the most proportion of the image pixels in multiperson scenarios. To handle this obstacle, [50,12,8,50,13] obtain structure cues and estimate camera parameters from the semantics of the scene (e.g., lines of the basketball court). [24,55] estimate the extrinsic camera parameters from the tracked human trajectories in more general multiperson scenes.…”
Section: Extrinsic Camera Calibration
confidence: 99%
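The citation above refers to calibrating cameras from scene semantics such as the lines of a sports court. A common building block for this is fitting a planar homography between known field coordinates and detected image points. The sketch below is a minimal Direct Linear Transform in numpy; the penalty-box coordinates and pixel locations are hypothetical examples, not values from the paper.

```python
import numpy as np

def fit_homography(pts_src, pts_dst):
    """Direct Linear Transform: fit H so that pts_dst ~ H @ pts_src.

    pts_src, pts_dst: (N, 2) arrays of matched 2-D points, N >= 4,
    with no three points collinear.
    """
    A = []
    for (x, y), (u, v) in zip(pts_src, pts_dst):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A)
    # The solution is the right singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Hypothetical example: corners of a penalty box in field coordinates (metres)
# matched to pixel locations detected in a broadcast frame.
field = np.array([[0.0, 0.0], [16.5, 0.0], [16.5, 40.3], [0.0, 40.3]])
pixels = np.array([[120.0, 560.0], [640.0, 540.0], [700.0, 180.0], [180.0, 160.0]])
H = fit_homography(field, pixels)
```

With exactly four correspondences in general position the fit is exact; with more, the SVD gives a least-squares estimate, and in practice a robust estimator (e.g. RANSAC) would be wrapped around it to handle line-detection outliers.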
“…We then qualitatively and quantitatively evaluate the estimated camera parameters. Since there exists a rigid transformation between the predicted camera parameters and the ground-truth provided in the datasets, we follow [12] to apply rigid alignment to the estimated cameras. We first com- ) since it relies on the dense correspondences between each view.…”
Section: Camera Calibration Evaluation
confidence: 99%
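The evaluation protocol quoted above applies a rigid alignment between predicted and ground-truth cameras before measuring error, since the two sets of parameters can differ by an arbitrary rigid transformation. A standard way to compute such an alignment of camera centres is the Kabsch algorithm; the sketch below is a generic illustration of that step, not the paper's exact evaluation code.

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch algorithm: find rotation R and translation t minimising
    || (src @ R.T + t) - dst ||.  src, dst: (N, 3) camera centres."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets.
    cov = (dst - mu_d).T @ (src - mu_s)
    U, _, Vt = np.linalg.svd(cov)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = mu_d - R @ mu_s
    return R, t

# Synthetic check: rotate and translate some points, then recover the motion.
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
dst = src @ Rz.T + np.array([1.0, -2.0, 3.0])
R, t = rigid_align(src, dst)
```

After alignment, per-camera position and rotation errors can be computed in a common frame. If the reconstruction is also only defined up to scale, the similarity-transform variant (Umeyama's method) is used instead.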