2007 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2007.383501
Kernel-Based 3D Tracking

Cited by 20 publications (12 citation statements)
References 16 publications
“…These (2D) image-plane trackers often run into problems such as target scale selection, target rotation, occlusion, view-dependence, and correspondence across views [2]. There are few 3D tracking approaches [2,3] that fuse results from independent 2D trackers to obtain 3D trajectories.…”
Section: Tracking With Camera Network
confidence: 99%
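
As background for the fusion of independent 2D trackers into 3D trajectories mentioned in the statement above, the following is a minimal sketch of linear (DLT) triangulation from two calibrated views, assuming known 3x4 projection matrices. It only illustrates the general principle; it is not the specific fusion scheme of [2,3] or of the reviewed paper, and the function name triangulate is an illustrative choice.

import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2: 3x4 camera projection matrices obtained from calibration.
    x1, x2: (u, v) image positions of the same target reported by two
            independent 2D trackers at the same time instant.
    Returns the estimated 3D position in world coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A X = 0 in the least-squares sense via SVD; the solution is the
    # right singular vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Usage (hypothetical inputs): X = triangulate(P1, P2, (u1, v1), (u2, v2));
# repeating this per frame yields a 3D trajectory from the two 2D tracks.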
“…Tyagi et al. [41] extended KBOT to 3D by combining evidence from multiple calibrated cameras. This algorithm uses a feature-level fusion framework to track the target directly in 3D space, and could be bootstrapped with an automatic re-initialization technique based on clustering 3D point clouds of the foreground targets.…”
Section: Kernel-based Mean-shift
confidence: 99%
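
The re-initialization idea summarized above, clustering the 3D point cloud of foreground targets to obtain candidate positions, can be sketched as follows. This is a hedged illustration, not the paper's exact procedure: DBSCAN from scikit-learn stands in for whatever clustering step the authors use, and the function name reinitialization_candidates and the eps / min_samples values are hypothetical.

import numpy as np
from sklearn.cluster import DBSCAN

def reinitialization_candidates(points_3d, eps=0.25, min_samples=20):
    """Cluster a foreground 3D point cloud and return one centroid per cluster.

    points_3d: (N, 3) array of 3D points reconstructed from the foreground
               masks of the calibrated cameras.
    Returns an (M, 3) array of candidate target positions from which a 3D
    tracker could be (re)started after a loss of track.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(points_3d).labels_
    centroids = []
    for label in set(labels):
        if label == -1:  # DBSCAN marks noise points with label -1
            continue
        centroids.append(points_3d[labels == label].mean(axis=0))
    return np.array(centroids)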
“…So far, no tracking method can handle this problem. Though there have been methods for re-initialization based on multiple cameras [41], [112], the re-initialization criterion for a single camera is still an open challenge.…”
Section: Conclusion and Future Directions
confidence: 99%
“…The method in [17] fuses the results obtained by existing single-view tracking methods that are applied individually to each of the views. However, the limitations of single-view methods in handling occlusions still affect the fused results.…”
Section: Related Work
confidence: 99%