2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)
DOI: 10.1109/avss.2016.7738027
Robust discriminative tracking via query-by-bagging

Cited by 7 publications (13 citation statements)
References 34 publications
“…Zhou [60] categorizes the diversity generation heuristics into (i) manipulation of data samples based on sampling approaches such as bagging and boosting (e.g. in [39]); (ii) manipulation of input features, such as online boosting [19], random subspaces [45], random ferns [42] and random forests [44], or combining different layers, neurons or interconnection layouts of CNNs [21,34]; (iii) manipulation of learning parameters; and (iv) manipulation of the error representation. The literature also suggests a fifth category, manipulation of the error function to encourage diversity, such as ensemble classifier selection based on the Fisher linear discriminant [53].…”
Section: Prior Work (mentioning, confidence: 99%)
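
The excerpt above groups diversity heuristics by what is randomized. A minimal sketch of category (ii), training each ensemble member on a random subspace of the input features, is given below; the function names, the choice of decision trees as base learners, and all parameters are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # stand-in base learner (assumption)

def train_random_subspace_ensemble(X, y, n_members=5, subspace_ratio=0.5, seed=None):
    # Each member sees the same samples but a random subset of features,
    # which decorrelates the learned models (category (ii) above).
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    k = max(1, int(subspace_ratio * n_features))
    members = []
    for _ in range(n_members):
        feat_idx = rng.choice(n_features, size=k, replace=False)
        clf = DecisionTreeClassifier(max_depth=3).fit(X[:, feat_idx], y)
        members.append((feat_idx, clf))
    return members

def predict_ensemble(members, X):
    # Majority vote over the members, assuming binary labels in {0, 1}.
    votes = np.stack([clf.predict(X[:, idx]) for idx, clf in members])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```

Category (i) differs only in that the rows (samples) rather than the columns (features) would be resampled for each member.
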
“…Ensemble tracking provides an effective framework for tackling one or more of these challenges. In such frameworks, the self-learning loop is broken, and the labeling process is performed by leveraging a group of classifiers with different views [19,21,44], subsets of the training data [39], or memories [38,57]. The main challenge in ensemble methods is how to decorrelate the ensemble members and diversify the learned models [21].…”
Section: Introduction (mentioning, confidence: 99%)
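
As a rough illustration of how the self-learning loop can be broken, the sketch below updates each ensemble member with labels produced by the consensus of the other members rather than by its own output. The predict/partial_fit interface and the agreement threshold are assumptions of this sketch, not the method of any cited tracker.

```python
import numpy as np

def co_label_and_update(members, samples, agreement=0.8):
    # members: incremental binary classifiers exposing predict(X) -> {0, 1}
    # and partial_fit(X, y); this interface is an assumption of the sketch.
    votes = np.stack([m.predict(samples) for m in members])      # shape (M, N)
    for i, member in enumerate(members):
        consensus = np.delete(votes, i, axis=0).mean(axis=0)     # peers' positive-vote share
        confident = (consensus >= agreement) | (consensus <= 1.0 - agreement)
        if confident.any():
            labels = (consensus[confident] >= 0.5).astype(int)
            member.partial_fit(samples[confident], labels)       # update with peer labels only
```
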
“…In supervised learning schemes (e.g., [28]), these thresholds are equal (τ_l = τ_u), whereas by employing semi-supervised learning for the classifier (e.g., [21], [41]), the trackers allow some samples to remain unlabeled. In addition, trackers based on multi-instance learning (e.g., [18], [34]) bag the samples and apply a label to each bag to handle labeling ambiguity, and active-learning trackers (e.g., [42]) rely on their oracle for disambiguation. Finally, the target location y_t is obtained by comparing the samples' classification scores.…”
Section: A. Tracking by Detection (mentioning, confidence: 99%)
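
A hedged sketch of the dual-threshold labeling rule described in the excerpt: scores above τ_u are labeled positive, scores below τ_l negative, and samples in between are left unlabeled. The function name and the use of -1 for "unlabeled" are assumptions of this sketch.

```python
import numpy as np

def label_samples(scores, tau_l, tau_u):
    # scores: per-sample classification scores; assumes tau_l <= tau_u.
    labels = np.full(scores.shape, -1)   # -1 marks "unlabeled" (semi-supervised case)
    labels[scores >= tau_u] = 1          # confident positives
    labels[scores <= tau_l] = 0          # confident negatives
    return labels
```

Setting tau_l == tau_u recovers the fully supervised rule from the excerpt, and the target location y_t corresponds to the sample with the highest score.
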
“…An alternative way to tune the weights of an ensemble is via a Bayesian treatment [6]. Aside from using different features, the members of an ensemble may be constructed from randomized subsets of the training data [32] or from different time snapshots of a classifier evolving over time [48].…”
Section: Related Work (mentioning, confidence: 99%)
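
The "time snapshots" idea mentioned in the excerpt can be sketched as keeping periodic frozen copies of the online classifier, so the ensemble mixes short-term and long-term memories of the target appearance. The class name and parameters below are illustrative assumptions.

```python
import copy

class SnapshotEnsemble:
    # Keeps periodic frozen copies of an online classifier as ensemble members.
    def __init__(self, snapshot_every=10, max_members=5):
        self.snapshot_every = snapshot_every
        self.max_members = max_members
        self.snapshots = []

    def maybe_snapshot(self, classifier, frame_index):
        # Store a deep copy every `snapshot_every` frames; keep only the most recent copies.
        if frame_index % self.snapshot_every == 0:
            self.snapshots.append(copy.deepcopy(classifier))
            self.snapshots = self.snapshots[-self.max_members:]
```
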
“…where the weights α_t^(c) ∈ A_t are tuned using boosting [3,16,32] or a Bayesian treatment [5]. A larger weight implies that the corresponding classifier of the ensemble is more discriminative, hence more useful.…”
Section: Ensemble Discriminative Tracking (mentioning, confidence: 99%)
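
The weighted combination implied by the equation fragment above can be sketched as a normalized weighted sum of the per-member scores; the function name and the normalization step are assumptions of this sketch, not the cited formulation.

```python
import numpy as np

def ensemble_score(member_scores, alpha_t):
    # member_scores: shape (C, N), one row of scores per classifier c.
    # alpha_t: shape (C,), the weights alpha_t^(c) of the ensemble members.
    alpha_t = np.asarray(alpha_t, dtype=float)
    alpha_t = alpha_t / alpha_t.sum()               # normalize so the weights sum to 1
    return alpha_t @ np.asarray(member_scores)      # weighted sum of scores, shape (N,)
```
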