2018 24th International Conference on Pattern Recognition (ICPR)
DOI: 10.1109/icpr.2018.8546179
Depth Masked Discriminative Correlation Filter

Abstract: Depth information provides a strong cue for occlusion detection and handling, but it has been largely omitted in generic object tracking until recently due to the lack of suitable benchmark datasets and applications. In this work, we propose a Depth Masked Discriminative Correlation Filter (DM-DCF) which adopts novel depth-segmentation-based occlusion detection, which stops correlation filter updating, and depth masking, which adaptively adjusts the spatial support of the correlation filter. In the Princeton RGBD Tracking Bench…
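The two mechanisms the abstract names, depth-segmentation-based occlusion detection that gates filter updates and a depth mask that restricts the filter's spatial support, can be sketched roughly as follows. The function names, thresholds, and depth convention (metric depth, 0 = missing reading) are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def occlusion_detected(depth_patch, target_depth, tol=0.3, occ_ratio=0.35):
    """Flag occlusion when too many pixels in the target region lie
    noticeably closer to the camera than the tracked object's depth.
    Thresholds here are illustrative, not taken from the paper."""
    valid = depth_patch > 0                                  # 0 = missing depth
    closer = (depth_patch < target_depth * (1.0 - tol)) & valid
    return closer.sum() / max(valid.sum(), 1) > occ_ratio

def depth_mask(depth_patch, target_depth, tol=0.3):
    """Binary spatial support: keep only pixels near the target's depth,
    so the correlation filter ignores foreground occluders."""
    return (np.abs(depth_patch - target_depth) < tol * target_depth).astype(np.float32)
```

In a tracker loop, `occlusion_detected` would suppress the model update for the current frame, while `depth_mask` would multiply the training patch (or the filter's spatial reliability map) before learning.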

Cited by 20 publications (19 citation statements) · References 23 publications
“…In RGB-D tracking, direct extensions of RGB methods by adding the D-channel as an additional input dimension have achieved considerable success. In particular, discriminative correlation filter (DCF) based methods have shown excellent performance on the Princeton RGB-D tracking benchmark [35], confirming the reputation gained on RGB benchmarks [22,23,19,20,6,1]. Furthermore, DCFs are efficient in both learning of the visual target appearance model and in target localization, which are both implemented by FFT, running in near real time on a standard CPU.…”
Section: Introduction (mentioning; confidence: 69%)
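The FFT-based learning and localization the excerpt refers to can be illustrated with a minimal single-channel correlation filter in the style of MOSSE: training is a per-frequency ridge regression in the Fourier domain, and localization is an elementwise product there. This is a generic sketch of the DCF recipe, not the implementation of any tracker cited above.

```python
import numpy as np

def dcf_train(patch, target_response, lam=1e-2):
    """Closed-form DCF learning in the Fourier domain (MOSSE-style,
    single channel). lam is the ridge regularizer."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target_response)
    # Per-frequency least squares: filter that maps patch -> Gaussian peak.
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def dcf_localize(patch, H_hat):
    """Correlation is an elementwise product in the Fourier domain;
    the response peak gives the target position."""
    resp = np.real(np.fft.ifft2(np.fft.fft2(patch) * H_hat))
    return np.unravel_index(np.argmax(resp), resp.shape)
```

Because both steps cost only a few FFTs per frame, this explains the near real-time CPU speeds mentioned in the excerpt.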
“…The OTR tracker is compared to all trackers available on the PTB leaderboard: ca3dms+toh [26], CSR-rgbd++ [19], 3D-T [3], PT [35], OAPF [31], DM-DCF [20], DS-KCF-Shape [16], DS-KCF [6], DS-KCF-CPP [16], and hiob lc2 [36], and we added two recent trackers, STC [40] and DLST [1]. Results are reported in Table 1. OTR convincingly sets the new state of the art in terms of both overall ranking and average success, by a large margin over the next-best trackers (Table 1).…”
Section: Performance on PTB Benchmark [35] (mentioning; confidence: 99%)
“…Bibi et al. [3] represented the target by sparse, part-based 3-D cuboids while adopting a particle filter as their motion model. Hannuna et al. [14], An et al., and Kart et al. [20] adopted Gaussian foreground masks on depth images in CSRDCF [32] training. They later extended their work by using a graph-cut method with color and depth priors for the foreground mask segmentation [19] and more recently proposed a view-specific DCF using masks based on the object's 3D structure [21].…”
Section: Related Work (mentioning; confidence: 99%)
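A Gaussian foreground mask of the kind mentioned above can be sketched as a simple spatial prior over the search window, down-weighting pixels far from the expected target center before filter training. The shape parameters are illustrative, and this sketch omits the depth weighting used by the cited work.

```python
import numpy as np

def gaussian_foreground_mask(h, w, sigma_frac=0.25):
    """Isotropic Gaussian prior over an h-by-w search window, peaking at
    the window center. sigma_frac scales the spread relative to the
    window size; the value here is an illustrative assumption."""
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sy, sx = sigma_frac * h, sigma_frac * w
    return np.exp(-0.5 * (((y - cy) / sy) ** 2 + ((x - cx) / sx) ** 2))
```

Multiplying the training patch by such a mask suppresses background pixels at the window borders, which is the role the foreground mask plays in constrained DCF training.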
“…However, RGB-based trackers suffer under adverse environmental conditions, e.g., low illumination and fast motion. Some works [24,25,30,31,52,74] try to introduce additional modalities (e.g., depth and thermal infrared) to improve tracking performance. However, when the tracked target is in high-speed motion or in an environment with a wide dynamic range, these sensors usually cannot provide satisfactory results.…”
Section: Introduction (mentioning; confidence: 99%)
“…To the best of our knowledge, we are the first to jointly explore RGB and events for object tracking, based on their similarities and differences, in an end-to-end manner. This work is essentially object tracking with multi-modal data, which includes RGB-D tracking [24,25,52,60], RGB-T tracking [27,30,31,69,74], and so on. However, since the output of an event-based camera is an asynchronous stream of events, event-based data is fundamentally different from the data of other sensors, which has been addressed well by multi-modal tracking methods.…”
Section: Introduction (mentioning; confidence: 99%)