2005 Seventh IEEE Workshops on Applications of Computer Vision (WACV/MOTION'05), Volume 1, 2005
DOI: 10.1109/acvmot.2005.107
Semi-Supervised Self-Training of Object Detection Models

Cited by 700 publications (468 citation statements)
References 8 publications
“…They control how aggressive the automatic training process is. Similar parameters also exist in other approaches to automatically training scene-specific detectors [10,11,12,13,14,15,16]. Our approach is robust to these parameters within a certain range.…”
Section: Conclusion and Discussion (mentioning)
confidence: 78%
“…By calculating the frame difference as 0.5(|I_t − I_{t−50}| + |I_t − I_{t+50}|), moving pixels inside a detection window are thresholded and counted. Similar to other self-training [15,13] or co-training [11,10,16,21] frameworks, the confident positive examples are found by thresholding L_p(z) > L_0. The larger the threshold is, the more conservative the strategy of selecting examples is.…”
Section: Confident Positive Examples of Pedestrians (mentioning)
confidence: 99%
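To make the selection rule in this excerpt concrete, here is a minimal NumPy sketch of the symmetric frame difference and the confidence-thresholded selection of positive examples. The 50-frame gap follows the excerpt; the pixel threshold, box format, and detection score field are illustrative assumptions, not details taken from the cited paper.

```python
import numpy as np

def frame_difference(frames, t, gap=50):
    """Symmetric difference 0.5 * (|I_t - I_{t-gap}| + |I_t - I_{t+gap}|)."""
    i_t = frames[t].astype(np.float32)
    return 0.5 * (np.abs(i_t - frames[t - gap]) + np.abs(i_t - frames[t + gap]))

def count_moving_pixels(diff, box, pixel_thresh=15.0):
    """Count pixels inside a detection window (x0, y0, x1, y1) whose
    frame difference exceeds pixel_thresh (threshold is an assumption)."""
    x0, y0, x1, y1 = box
    return int((diff[y0:y1, x0:x1] > pixel_thresh).sum())

def select_confident_positives(detections, score_thresh):
    """Keep detections whose score L_p(z) exceeds L_0 (score_thresh);
    a larger threshold selects examples more conservatively."""
    return [d for d in detections if d["score"] > score_thresh]
```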
“…In the self-training approach, a classifier is trained on the labeled data and then used to classify the unlabeled ones. The most confidently classified unlabeled points are added to the training set, together with their predicted labels, and the process is repeated until convergence [5]. Approaches based on co-training [2] assume that the features describing the objects can be divided into two subsets such that each of them is sufficient to train a good classifier, and that the two sets are conditionally independent given the class attribute.…”
Section: Learning From Labeled and Unlabeled Data (mentioning)
confidence: 99%
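The self-training loop summarized in this excerpt can be sketched generically as follows. The base classifier, the 0.95 confidence cutoff, and the stopping rule are illustrative assumptions, not the choices of any particular cited method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_labeled, y_labeled, X_unlabeled, conf_thresh=0.95, max_rounds=10):
    X_l, y_l = np.asarray(X_labeled), np.asarray(y_labeled)
    X_u = np.asarray(X_unlabeled)
    clf = LogisticRegression(max_iter=1000)
    for _ in range(max_rounds):
        clf.fit(X_l, y_l)                       # train on the currently labeled data
        if len(X_u) == 0:
            break
        proba = clf.predict_proba(X_u)          # classify the unlabeled points
        confident = proba.max(axis=1) >= conf_thresh
        if not confident.any():                 # nothing confident left: stop
            break
        pseudo_labels = clf.classes_[proba[confident].argmax(axis=1)]
        X_l = np.vstack([X_l, X_u[confident]])          # add confident points
        y_l = np.concatenate([y_l, pseudo_labels])      # with predicted labels
        X_u = X_u[~confident]
    return clf
```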
“…Our work does not require pixel-level labeled data. In [17], learning requires both pixel-level labeled data and frame-level labeled data. An object detector is initially trained on the pixel-level labeled data, and the learned model is used to estimate labels for the frame-level labeled data.…”
Section: Introduction (mentioning)
confidence: 99%
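The two-stage scheme described in this excerpt (train a detector on pixel-level labels, then estimate labels for frame-level labeled frames) can be illustrated roughly as below. The `detector.detect` interface, the score cutoff, and the frame-level label format are hypothetical placeholders, not the interface of the cited system.

```python
def estimate_pixel_labels(detector, frames_with_object, min_score=0.8):
    """Rough sketch: for frames labeled only at the frame level as containing
    the object, keep the trained detector's single highest-scoring window as
    an estimated pixel-level label (interface and threshold are assumptions)."""
    estimated = {}
    for frame_id, image in frames_with_object.items():
        detections = detector.detect(image)        # assumed: list of (box, score)
        if not detections:
            continue
        box, score = max(detections, key=lambda d: d[1])
        if score >= min_score:                     # keep only confident estimates
            estimated[frame_id] = box
    return estimated
```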