2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2018.00506
Efficient Diverse Ensemble for Discriminative Co-tracking

Abstract: Ensemble discriminative tracking utilizes a committee of classifiers to label data samples, which are in turn used to retrain the tracker to localize the target using the collective knowledge of the committee. Committee members may vary in their features, memory update schemes, or training data; nevertheless, it is inevitable that some members agree excessively because of large overlaps in their version spaces. To remove this redundancy and achieve effective ensemble learning, it is critical for…
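The committee-based labeling described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration of majority-vote labeling by a classifier committee, not the paper's actual implementation; the function and committee members are invented for the example.

```python
# Hypothetical sketch of committee-based labeling: each member classifies
# the sample, and the majority vote becomes the label used for retraining.
from collections import Counter

def committee_label(committee, sample):
    """Label a sample by majority vote; also return the agreement ratio."""
    votes = [clf(sample) for clf in committee]        # one prediction per member
    label, count = Counter(votes).most_common(1)[0]   # majority label
    return label, count / len(votes)                  # agreement in [0, 1]

# Toy committee: three threshold classifiers over a scalar feature.
committee = [lambda x, t=t: int(x > t) for t in (0.2, 0.5, 0.8)]
print(committee_label(committee, 0.6))  # two of three members vote 1
```

A low agreement ratio flags exactly the samples on which the committee disagrees, which is the signal exploited by query-by-committee selection.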


Cited by 28 publications (10 citation statements)
References 55 publications (116 reference statements)
“…To make this training-intensive technology accessible to everyone, recent studies have explored ways to reduce redundancy in ensembling. Meshgi, Oba, and Ishii (2018) have exploited concepts from active learning to reduce training time and redundancy. Rather than using the whole dataset for training, their ensemble method is trained on the most informative samples, which maximize learning under the query-by-committee algorithm (Seung, Opper, and Sompolinsky 1992).…”
Section: Introduction (mentioning)
Confidence: 99%
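The query-by-committee selection described in the statement above can be sketched as follows. This is an illustrative implementation of the general idea from Seung, Opper, and Sompolinsky (1992), with disagreement measured by vote entropy; all names and the toy committee are hypothetical, not the cited paper's code.

```python
# Illustrative query-by-committee (QBC) selection: pick the unlabeled sample
# on which committee members disagree most, measured by vote entropy.
import math
from collections import Counter

def vote_entropy(votes):
    """Entropy of the committee's vote distribution on one sample."""
    n = len(votes)
    return -sum((c / n) * math.log(c / n) for c in Counter(votes).values())

def query_by_committee(committee, pool):
    """Return the pool sample with maximal committee disagreement."""
    return max(pool, key=lambda x: vote_entropy([clf(x) for clf in committee]))

# Toy committee: three threshold classifiers over a scalar feature.
committee = [lambda x, t=t: int(x > t) for t in (0.3, 0.5, 0.7)]
pool = [0.1, 0.4, 0.6, 0.9]
print(query_by_committee(committee, pool))  # prints 0.4, where members split 1-2
```

Samples on which all members agree contribute zero entropy and are skipped, so only the most informative (most contested) samples are queried for labels, shrinking the training set as the statement describes.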
“…Meshgi et al. used the active learning method to reduce training time and redundancy: rather than using the entire data set, they trained on the most useful samples [46,47]. In addition to active learning, decomposing the input space into multiple regions and training a convolutional neural network in each region, a divide-and-conquer strategy, can also reduce redundancy.…”
Section: ESRs (Ensembles with Shared Representations) (mentioning)
Confidence: 99%