2013
DOI: 10.1007/978-3-642-36279-8_35
Learning to Segment and Track in RGBD

Cited by 22 publications (17 citation statements) · References 23 publications
“…The goal of our algorithm is to collect training examples of objects in unstructured environments from RGBD sequences. Our method is different from (Teichman et al, 2013) in multiple, fundamental aspects. First of all, the authors introduce the use of a large number of features instead of only geometric cues, computed to avoid redundant computational complexity.…”
Section: Related Work
confidence: 94%
“…They also provided multiple baseline algorithms under two main categories: depth as an additional cue and point cloud tracking. Among the ten proposed variations, … The most similar recent work to ours is that of (Teichman et al, 2013), which uses online learning to segment consecutive RGBD frames. However, this work assumes a single initial segmentation is provided by human labeling.…”
Section: Related Work
confidence: 99%
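The statement above describes propagating a single human-provided initial segmentation through consecutive RGBD frames. As a rough intuition only (the actual method in Teichman et al., 2013 learns a segmentation model online from many cues), a naive depth-based mask propagation between two frames might look like the following sketch; all names and the tolerance value here are hypothetical:

```python
# Illustrative sketch: carry a binary object mask from one depth frame to the
# next by keeping pixels whose depth barely changed. This is a toy stand-in,
# not the learned online segmentation of (Teichman et al., 2013).

def propagate_mask(prev_depth, next_depth, prev_mask, tol=0.05):
    """Propagate a 0/1 object mask across consecutive depth frames.

    prev_depth, next_depth: 2-D lists of depth values in meters.
    prev_mask: 2-D list of 0/1 flags marking the object in the previous frame.
    tol: maximum allowed per-pixel depth change (meters) to stay labeled.
    """
    rows, cols = len(prev_depth), len(prev_depth[0])
    next_mask = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if prev_mask[r][c] and abs(next_depth[r][c] - prev_depth[r][c]) <= tol:
                next_mask[r][c] = 1
    return next_mask

prev_depth = [[1.0, 1.0], [2.0, 2.0]]
next_depth = [[1.02, 1.5], [2.0, 2.01]]
prev_mask  = [[1, 1], [0, 1]]
# The pixel at (0, 1) jumped 0.5 m between frames, so it leaves the mask.
print(propagate_mask(prev_depth, next_depth, prev_mask))
```

A real tracker would additionally handle camera motion, object deformation, and occlusion, which is precisely why the cited work learns its segmentation model online rather than relying on a fixed depth threshold.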
“…The individual steps shown in Figure 5: player selection/background subtraction, skeleton tracking and gestures were obtained from the Omek Beckon™ SDK 3.0 Windows Edition-Developer's Guide, included in the installation of the SDK. There are several methodologies and techniques in the literature to perform background subtraction based on ToF cameras [54,55] and based on structured light cameras [56,57]. …”
Section: System Design
confidence: 99%
“…We evaluate our method using the Stanford Track Collection [20], a dataset of about 13,000 tracks (a total of about 1.3 million frames) extracted from natural, unstaged suburban environments with a dense LIDAR system mounted on a car. Data was recorded while driving and while parked at busy intersections with many people, bicyclists, and cars.…”
Section: A Dataset
confidence: 99%