2015
DOI: 10.1587/transinf.2014edl8176

Robust Superpixel Tracking with Weighted Multiple-Instance Learning

Abstract: This paper proposes a robust superpixel-based tracker via multiple-instance learning, which exploits the importance of instances and the mid-level features captured by superpixels for object tracking. We first present a superpixel-based appearance model, which is able to compute the confidences of the object and the background. Most importantly, we introduce sample importance into the multiple-instance learning (MIL) procedure to improve tracking performance. The importance for each instance in the posi…
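The abstract is truncated above, but the core idea it describes, weighting the instances inside a MIL bag by their importance rather than treating them equally, can be sketched. The snippet below is an illustrative sketch only, not the paper's actual formulation: the function names, the Gaussian falloff around the current target estimate, and the weighted-sum bag pooling are assumptions standing in for whatever importance measure and bag probability the authors define.

```python
import numpy as np

def instance_importance(positions, center, sigma=10.0):
    """Hypothetical importance weights: instances sampled closer to the
    current target estimate count more, via a Gaussian falloff."""
    d2 = np.sum((positions - center) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w.sum()  # normalize so the weights sum to 1

def weighted_bag_probability(instance_probs, weights):
    """Importance-weighted bag probability: a weighted sum of the
    per-instance posteriors instead of the usual Noisy-OR pooling."""
    return float(np.dot(weights, instance_probs))

# Toy usage: three instances sampled around the estimated center (50, 50)
positions = np.array([[50.0, 50.0], [55.0, 48.0], [70.0, 70.0]])
probs = np.array([0.9, 0.7, 0.4])  # per-instance classifier scores
w = instance_importance(positions, np.array([50.0, 50.0]))
print(weighted_bag_probability(probs, w))
```

In a scheme like this, instances far from the current estimate contribute little to the positive bag, which is one plausible way sample importance could sharpen the MIL update; the paper's precise weighting is not recoverable from the truncated abstract.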

Cited by 2 publications (3 citation statements)
References 15 publications
“…The sequences chosen for evaluating the algorithms contain challenges like occlusion, the change of illumination and large movement of the target. The compared algorithms include IVT [Ross, Lim, Lin et al (2008)], VTD [Kwon and Lee (2010)], MIL [Babenko, Yang and Belongie (2009)], L1 [Bao, Wu, Ling et al (2012)], TLD [Kalal, Mikolajczyk and Matas (2012)], Frag [Adam, Rivlin and Shimshoni (2006)], SPT [Yang, Lu and Yang (2014)], and Cheng [Cheng (2015)], whose code can be obtained from the authors' homepage. Then, for completeness of the analysis, the proposed tracker is performed on the recent dataset VOT2014 [Kristan, Leonardis, Matas et al (2015)], which contains 25 sequences.…”
Section: Experimental Setups
confidence: 99%
“…Concerning to estimate the target position, Junqiu employs multiple hypotheses for superpixel matching and projects the matching results onto a displacement confidence map [Wang and Yagi (2014)]. In consideration of recovering the object from the drifting scene, Xu proposes robust superpixel tracking with weighted multiple-instance learning which achieves robust and accurate performance [Cheng (2015)]. Apart from making efforts on the discriminative appearance model, Yuxia proposes a superpixel tracking method via graph-based hybrid discriminative-generative appearance model to deal with occlusion and shape deformation [Wang and Zhao (2015)].…”
Section: Introduction
confidence: 99%
“…On top of that, DPMs consist of a mixture of components to describe different poses and perspectives of target. Tracking target by object recognition and detection methods has been proven as a promising way by researchers [4], [5]. Differing from previous work, our work attempts to introduce prior knowledge of target at the beginning of tracking, while others focus on updating the target model iteratively by on-line learning methods.…”
Section: Introduction
confidence: 99%