CVPR 2011
DOI: 10.1109/cvpr.2011.5995733
Context tracker: Exploring supporters and distracters in unconstrained environments



Cited by 454 publications (282 citation statements)
References 32 publications
“…Most state-of-the-art methods rely solely on image intensity information [17,31,8,14,35,20,7], while others employ simple color space transformations [29,27,28]. On the contrary, feature representations have been thoroughly investigated in the related fields of object recognition and action recognition [22,21].…”
Section: Related Work
confidence: 99%
“…We compare our proposed feature representation with 15 state-of-the-art trackers: CT [35], TLD [20], DFT [31], EDFT [8], ASLA [18], L1APG [2], CSK [17], SCM [36], LOT [28], CPF [29], CXT [7], Frag [1], Struck [14], LSHT [15] and LSST [32]. Table 2 shows the comparison of our tracker with the state-of-the-art tracking methods using median DP, OP and CLE.…”
Section: Experiments 2: State-of-the-Art Comparison
confidence: 99%
“…3. These trackers include Struck [8], SCM [45], TLD [15], ASLA [14], VTD [17], VTS [18], CXT [4], LSK [24], CSK [10], MTT [44] and LOT [27]. Note that all the plots are automatically generated by the code library supported by the benchmark providers.…”
Section: Experiments 1: CVPR2013 Visual Tracker Benchmark
confidence: 99%
“…We compare the proposed tracking methods (CLRST, LRST, LRT and ST) with 14 state-of-the-art visual trackers, including the visual tracking by decomposition (VTD) method (Kwon and Lee 2010), ℓ1 tracker, incremental visual tracking (IVT) method (Ross et al 2008), online multiple instance learning (MIL) method (Babenko et al 2009), fragments-based (Frag) tracking method (Adam et al 2006), online AdaBoost (OAB) method (Grabner et al 2006), multi-task tracking (MTT) method (Zhang et al 2012d), circulant structure tracking (CST) method (Henriques et al 2012), real-time compressive tracking (RTCT) method (Zhang et al 2012a), tracking by detection (TLD) method (Kalal et al 2010), context-sensitive tracking (CT) method (Dinh et al 2011), distribution field tracking (DFT) method (Sevilla-Lara and Learned-Miller 2012), sparse collaborative model (SCM) (Zhong et al 2012), and Struck (Hare et al 2011). For fair comparisons, we use the publicly available source or binary codes provided by the authors.…”
Section: Evaluated Algorithms
confidence: 99%