2016
DOI: 10.1007/s00779-016-0997-6

Out of sight: a toolkit for tracking occluded human joint positions

Abstract: Real-time identification and tracking of the joint positions of people can be achieved with off-the-shelf sensing technologies such as the Microsoft Kinect, or other camera-based systems with computer vision. However, tracking is constrained by the system's field of view of people. When a person is occluded from the camera view, their position can no longer be followed. Out of Sight addresses the occlusion problem in depth-sensing tracking systems. Our new tracking infrastructure provides human skeleton joint …

Cited by 13 publications (8 citation statements)
References 30 publications
“…The results were provided as CDFs and plots based on the error distance and a few percentile values. Out of Sight [59] is a toolkit for tracking occluded human joint positions based on Kinect cameras. Some in-room tests were run to evaluate different contexts (stationary, stepping, walking, presence of obstacles, and occlusion).…”
Section: Diversity Problem Review
confidence: 99%
“…Also, due to the limited resolution of the sensor (640×480 pixels), the algorithm detected single individuals as multiple persons (e.g., when wearing high-contrast colored clothing), as well as groups of people as individuals (e.g., when moving close together), leading to false positives (FP) and false negatives (FN), as indicated in Table 1. The accuracy of the tool can be increased by deploying multiple cameras [22] or reconstructing occluded parts of the scene [28].…”
Section: Discussion
confidence: 99%
“…However, the use of multiple vision sensors results in data fusion and camera calibration problems [29], [43], [44]. A Kalman filter method [40] was used to address the fusion problem for multiple RGB-D sensors in a system that tracks a human skeleton, and Wu et al. [41] proposed a toolkit for tracking occluded human joints that merges the fields of view of multiple RGB-D sensors using geometric calibration and affine transformation.…”
Section: Users' Evaluation
confidence: 99%
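The last citation statement describes merging the fields of view of multiple RGB-D sensors through geometric calibration and an affine transformation. A minimal sketch of that idea (not the toolkit's actual implementation; the function names here are hypothetical) is to fit a 3D affine map between corresponding joint positions observed by two cameras with linear least squares, then re-express one camera's skeleton joints in the other camera's coordinate frame:

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares 3D affine map taking src points to dst points.

    src, dst: (N, 3) arrays of corresponding joint positions, e.g. the
    same skeleton joints seen simultaneously by two depth cameras.
    Returns a (4, 3) matrix M such that [x y z 1] @ M approximates the
    matching dst row. Needs N >= 4 non-coplanar correspondences.
    """
    n = src.shape[0]
    src_h = np.hstack([src, np.ones((n, 1))])        # homogeneous coords
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # (4, 3) solution
    return M

def transform_joints(joints, M):
    """Map (N, 3) joint positions into the other camera's frame."""
    joints_h = np.hstack([joints, np.ones((joints.shape[0], 1))])
    return joints_h @ M
```

In practice the correspondences would come from a calibration phase in which both cameras observe the same person or target; joints occluded from the primary camera but visible to a secondary one can then be transformed into the primary frame and merged into a single skeleton.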