2016
DOI: 10.1007/978-3-319-46454-1_29

Beyond Correlation Filters: Learning Continuous Convolution Operators for Visual Tracking

Abstract: Discriminative Correlation Filters (DCF) have demonstrated excellent performance for visual object tracking. The key to their success is the ability to efficiently exploit available negative data by including all shifted versions of a training sample. However, the underlying DCF formulation is restricted to single-resolution feature maps, significantly limiting its potential. In this paper, we go beyond the conventional DCF framework and introduce a novel formulation for training continuous convoluti…
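To make the contrast concrete: the conventional single-resolution DCF baseline mentioned in the abstract has a closed-form solution, because ridge regression over all circular shifts of a training patch diagonalizes under the DFT and can be solved independently per frequency. Below is a minimal single-channel sketch of that baseline (MOSSE-style), not of the paper's continuous formulation; the function names, the Gaussian-label construction, and the toy usage at the end are illustrative assumptions.

import numpy as np

def gaussian_labels(shape, sigma=2.0):
    # Desired correlation output: a Gaussian peak over shifts,
    # rolled so the zero-shift response sits at index (0, 0).
    h, w = shape
    ys = np.arange(h) - h // 2
    xs = np.arange(w) - w // 2
    g = np.exp(-(ys[:, None] ** 2 + xs[None, :] ** 2) / (2 * sigma ** 2))
    return np.roll(g, (-(h // 2), -(w // 2)), axis=(0, 1))

def train_filter(x, y, lam=1e-2):
    # Ridge regression over all circular shifts of the sample x,
    # solved per DFT coefficient: H = Y * conj(X) / (X * conj(X) + lam).
    # This is how the DCF "exploits all shifted versions" efficiently.
    X = np.fft.fft2(x)
    Y = np.fft.fft2(y)
    return (Y * np.conj(X)) / (X * np.conj(X) + lam)

def detect(H, z):
    # Filter response on a new patch z; the argmax gives the
    # estimated translation of the target between frames.
    resp = np.real(np.fft.ifft2(np.fft.fft2(z) * H))
    return np.unravel_index(resp.argmax(), resp.shape)

# Illustrative usage on a random patch (stand-in for image features):
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))
H = train_filter(x, gaussian_labels(x.shape))
print(detect(H, np.roll(x, (3, 5), axis=(0, 1))))  # approximately (3, 5)

In practice, trackers of this family apply a cosine window, learn one such filter per feature channel, and update the filter online. The limitation the abstract points to is that this discrete formulation ties the filter to a single feature-map resolution; the paper instead formulates training in a continuous spatial domain, so feature maps of different resolutions can be fused in a single learning problem.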

Cited by 1,491 publications (1,328 citation statements)
References 40 publications
“…The Discriminative Scale Space Tracker (DSST) [2] is essentially an extension of KCF that can handle scale changes, and it outperformed KCF by a small margin in the VOT2014 challenge. As further axis-aligned trackers, we include ANT [17], L1APG [1], and the best performing tracker from the VOT2016 challenge, the continuous convolution filters (CCOT) from Danelljan et al. [3]. We include the LGT [15] as one of the few open source trackers that estimates the object position as a rotated box. [Figure 6: bmx-trees from DAVIS [12]; legend: box-axis-aligned, box-rot, DSST [2], CCOT [3], ANT [17], L1APG [1].]…”
Section: Methods
confidence: 99%
“…As further axis-aligned trackers, we include ANT [17], L1APG [1], and the best performing tracker from the VOT2016 challenge, the continuous convolution filters (CCOT) from Danelljan et al. [3]. We include the LGT [15] as one of the few open source trackers that estimates the object position as a rotated box. [Figure 6: bmx-trees from DAVIS [12].] On the left, differences between box-no-scale and box-axis-aligned indicate that the object is changing scale and is occluded at frame 18 and around frames 60-70.…”
Section: Methods
confidence: 99%
“…1(b). These trackers include MEEM [22], CN [23], KCF [24], HCSVT [25], DSST [26], CNN-SVM [27], and C-COT [28].…”
Section: B. Quantitative Comparisons
confidence: 99%
“…Similarly, in order to infer the accurate location of the target object, a tracker needs to account for changes in several appearance properties (illumination change, blurriness, occlusion) and dynamic properties (expanding, shrinking, aspect ratio change). Although visual tracking research has achieved remarkable advances in the past decades [21-23, 32, 38-40], especially in recent years thanks to deep learning [6,8,29,35,36,41], most methods employ only a subset of these properties, or are too slow to run in real-time.…”
Section: Introduction
confidence: 99%
“…41]. Similarly, for correlation filter based trackers, only some of the convolutional features are useful at a time [6,8,26,30]. Therefore, by introducing an adaptive selection of attentional properties, additional dynamic properties can be considered for increased accuracy and robustness while keeping the computational time constant.…”
Section: Introduction
confidence: 99%