2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00100

Deep Meta Learning for Real-Time Target-Aware Visual Tracking

Abstract: In this paper, we propose a novel on-line visual tracking framework, based on a Siamese matching network and a meta-learner network, that runs at real-time speeds. Conventional discriminative visual tracking algorithms based on deep convolutional features require continuous re-training of classifiers or correlation filters, which involves solving complex optimization tasks to adapt to the new appearance of a target object. To alleviate this complex process, our proposed algorithm incorporates and utilizes a meta-lea…
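The mechanism sketched in the abstract — dense template matching plus a single feed-forward adaptation step in place of iterative classifier re-training — can be illustrated roughly as follows. This is a minimal NumPy sketch, not the paper's architecture: the function names `cross_correlate` and `meta_adapt`, and the simple additive update, are illustrative assumptions.

```python
import numpy as np

def cross_correlate(search, template):
    """Slide a template feature map over a larger search feature map
    and return the dense similarity (response) map, as in Siamese matching."""
    _, H, W = search.shape
    _, h, w = template.shape
    resp = np.zeros((H - h + 1, W - w + 1))
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            resp[i, j] = np.sum(search[:, i:i + h, j:j + w] * template)
    return resp

def meta_adapt(template, predicted_update, lr=0.1):
    """Hypothetical stand-in for a meta-learner: one feed-forward additive
    update to the matching weights, instead of solving an optimization
    problem at test time. (Illustrative assumption, not the paper's method.)"""
    return template + lr * predicted_update

# Toy example: a 1-channel 8x8 search feature map with the target planted at (2, 3).
search = np.zeros((1, 8, 8))
template = np.arange(9, dtype=float).reshape(1, 3, 3)
search[:, 2:5, 3:6] = template

resp = cross_correlate(search, template)
# The tracker localizes the target at the peak of the response map.
peak = tuple(int(v) for v in np.unravel_index(np.argmax(resp), resp.shape))
print(peak)  # -> (2, 3)
```

Because the response peak is found by a single correlation pass and the (hypothetical) meta step is a single forward update, no per-frame optimization loop is needed, which is what makes real-time operation plausible.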

Cited by 117 publications (73 citation statements)
References 47 publications
“…SCSAtt also achieved 7.31%, 5.99%, 11.69% and 7.31% gains in success score and 10.55%, 4.68%, 13.11% and 6.84% gains in precision score over the MemTrack [40], CREST [61], SRDCF [62], and DsiamM [7] trackers, respectively. Moreover, the proposed method has shown performance improvements of 5.34% and 2.56%, 4.28% and 3.03%, 10.99% and 8.66%, and 23.21% and 17.58% in precision and success over the most recent trackers, including DSAR-CF [63], MemDTC [45], MLT [65], and UDT [64], respectively. SCSAtt therefore consistently outperforms on both the success and precision metrics, which demonstrates the robustness of our tracker.…”
Section: B. Evaluation on OTB50 Benchmark
confidence: 89%
“…We also compared our proposed tracker with the most recent trackers, including DSAR-CF [63], MLT [65], and UDT [64]. The proposed tracker achieved 2.76%, 7.28%, and 12.5% improvements in precision score and 0.31%, 6.66%, and 9.20% improvements in success score compared to the DSAR-CF, MLT, and UDT trackers, respectively.…”
Section: A. Evaluation on OTB100 Benchmark
confidence: 98%
“…In order to achieve better and more powerful feature embeddings, RASNet [44] and DaSiamRPN [56] exploit several types of attention mechanisms and more effective training strategies with large-scale training sets, respectively. The DSiam tracker [14] and MLT tracker [4] update the tracking model via a rapid transformation module and a meta-learner network. The merits of deep networks [25,55] are also expected to be leveraged.…”
Section: Related Work 2.1 Visual Object Tracking
confidence: 99%
“…Most traditional algorithms for ground-based cloud recognition utilize hand-crafted features, for example, brightness, texture, shape and color, to represent cloud images [12][13][14][15][16][17][18], but they are deficient in modeling complex data distributions. Recently, the convolutional neural network (CNN) [19][20][21][22][23][24] has achieved remarkable performance in various research fields due to its ability to learn highly nonlinear feature transformations. Inspired by this, some cloud-related researchers employ CNNs to exploit visual features from ground-based cloud images and have pushed the performance of ground-based cloud recognition to a new level.…”
Section: Introduction
confidence: 99%