2016
DOI: 10.1007/978-3-319-48881-3_55

The Thermal Infrared Visual Object Tracking VOT-TIR2016 Challenge Results

Abstract: The Thermal Infrared Visual Object Tracking challenge 2015, VOT-TIR2015, aims at comparing short-term single-object visual trackers that work on thermal infrared (TIR) sequences and do not apply pre-learned models of object appearance. VOT-TIR2015 is the first benchmark on short-term tracking in TIR sequences. Results of 24 trackers are presented. For each participating tracker, a short description is provided in the appendix. The VOT-TIR2015 challenge is based on the VOT2013 challenge, but introduces the follow…

Cited by 49 publications (39 citation statements)
References 50 publications
“…In both cases, we leave out around 10% of the videos during training as a validation set. We evaluate our TIR tracker on the VOT-TIR2017 dataset [6], which is identical to the VOT-TIR2016 dataset [5], as the 2016 edition of this benchmark was far from being saturated. It contains 25 TIR videos of varying image resolution, with an average sequence length of 740 frames, adding up to a total of 13,863 frames.…”
Section: A. Datasets (mentioning)
confidence: 99%
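A minimal sketch of the video-level hold-out split described in the statement above, assuming each sequence is identified by name; the sequence names, the 10% fraction, and the helper function are illustrative, not part of the benchmark.

```python
import random

def split_videos(video_names, val_fraction=0.10, seed=0):
    """Hold out roughly val_fraction of the sequences for validation."""
    names = sorted(video_names)           # deterministic base order
    random.Random(seed).shuffle(names)    # reproducible shuffle
    n_val = max(1, round(len(names) * val_fraction))
    return names[n_val:], names[:n_val]   # (train, validation)

# Hypothetical sequence names; any video-level identifiers would do.
train_videos, val_videos = split_videos([f"seq_{i:03d}" for i in range(50)])
print(len(train_videos), len(val_videos))  # 45 5
```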
“…We follow the measures and evaluation protocol proposed by the VOT-TIR2017 benchmark [5]. The two primary measures are accuracy (A) and robustness (R), which have been shown to be highly interpretable and only weakly correlated [64].…”
Section: B. Evaluation Measures and Protocol (mentioning)
confidence: 99%
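A hedged sketch of the two primary measures named in the statement above, assuming per-frame IoU overlaps and failure flags are already available; the data layout and the reset handling are simplified relative to the full VOT protocol.

```python
def accuracy(overlaps, failed):
    """Average IoU over frames in which the tracker had not failed."""
    valid = [o for o, f in zip(overlaps, failed) if not f]
    return sum(valid) / len(valid) if valid else 0.0

def robustness(failed):
    """Raw robustness: the number of tracking failures (re-initialisations)."""
    return sum(failed)

# Hypothetical per-frame overlaps for a short run with one failure.
overlaps = [0.78, 0.81, 0.00, 0.65, 0.70]
failed   = [False, False, True, False, False]
print(accuracy(overlaps, failed), robustness(failed))  # 0.735 1
```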
“…In this subsection, we present the evaluation results on the thermal infrared object tracking benchmark VOT-TIR2016 [17] to illustrate that our proposed method achieves impressive results against most state-of-the-art trackers. We employ accuracy, robustness and EAO score as the evaluation metrics to conduct the experiments.…”
Section: Evaluation on VOT-TIR2016 (mentioning)
confidence: 99%
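The EAO score mentioned above can be illustrated with a simplified sketch: average the expected-overlap curve over an interval of sequence lengths. This is only an approximation of the official VOT EAO computation, and the interval bounds and synthetic overlaps below are assumptions for illustration.

```python
import numpy as np

def eao(per_run_overlaps, lo=100, hi=400):
    """per_run_overlaps: per-frame IoU arrays, treated as zero after a run ends."""
    max_len = max(len(o) for o in per_run_overlaps)
    padded = np.zeros((len(per_run_overlaps), max_len))
    for i, o in enumerate(per_run_overlaps):
        padded[i, :len(o)] = o
    # Expected average overlap for each sequence length Ns, then average
    # that curve over the interval [lo, hi] of typical sequence lengths.
    curve = [padded[:, :ns].mean() for ns in range(1, max_len + 1)]
    hi = min(hi, max_len)
    return float(np.mean(curve[lo - 1:hi]))

# Synthetic runs of different lengths, roughly matching the 740-frame average.
runs = [np.random.uniform(0.4, 0.9, n) for n in (250, 500, 740)]
print(round(eao(runs), 3))
```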
“…Compared trackers. For more comprehensive evaluations, we compare our LMSCO with nine state-of-the-art trackers on the VOT-TIR2016 benchmark, including MDNet [16], deepMKCF [34], SHCT [15], DSST [5], NSAMF [27], S-RDCF [6], Staple [14], FCT [17] and GGT2 [35]. All these…”
Section: Evaluation on VOT-TIR2016 (mentioning)
confidence: 99%