2014
DOI: 10.1007/s11554-014-0429-7

Hybrid 3D–2D human tracking in a top view

Cited by 23 publications (30 citation statements) | References 43 publications
“…In particular, the head's points, selected by Equation (6), are deleted: ‖PC_hd(t) − J_hd(t − 1)‖ < th_hd (6), where PC_hd(t) represents the points of the head in the current frame, J_hd is the head joint calculated as the mean of all the elements in PC_hd, and th_hd is a distance threshold used to find the head points at frame t, based on the position of J_hd in the previous frame.…”
Section: Growing Neural Gas Network
confidence: 99%
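The selection rule in Equation (6) can be sketched as a simple distance gate over a point cloud. This is a minimal illustration, not the cited paper's implementation: the function names, array layout (N×3 NumPy arrays in a shared coordinate frame), and the threshold value are assumptions.

```python
import numpy as np

def select_head_points(cloud, j_hd_prev, th_hd):
    """Equation (6): keep points within th_hd of the previous head joint."""
    dists = np.linalg.norm(cloud - j_hd_prev, axis=1)
    return cloud[dists < th_hd]

def head_joint(head_points):
    """J_hd is computed as the mean of all selected head points."""
    return head_points.mean(axis=0)
```

A point far from the previous head joint is rejected by the gate, so the head estimate cannot jump to, say, a raised hand between consecutive frames.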
“…Consequently, the network will be parallel to the y-axis, irrespective of how the person is oriented relative to the camera. The triangle highlights the joint J_hd calculated with Equation (6). The shoulder joints are selected as the nodes to the left/right of J_hd with the lowest y coordinate among all the nodes in the network.…”
Section: Growing Neural Gas Network
confidence: 99%
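The shoulder-joint rule described above can be sketched as follows. This is one plausible reading of the quoted statement, not the authors' code: it assumes network nodes are (x, y) rows of a NumPy array and that "left/right" means splitting the nodes on the x coordinate of J_hd.

```python
import numpy as np

def shoulder_joints(nodes, j_hd):
    """Pick left/right shoulder as the node on each side of the head
    joint (split on x) with the lowest y coordinate."""
    left = nodes[nodes[:, 0] < j_hd[0]]
    right = nodes[nodes[:, 0] >= j_hd[0]]
    j_ls = left[np.argmin(left[:, 1])]
    j_rs = right[np.argmin(right[:, 1])]
    return j_ls, j_rs
```

Because the network is aligned with the y-axis regardless of the person's orientation, this split is stable across frames.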
“…• Migniot et al [35] address the problem of 3D tracking of human gesture for buying behavior estimation. The top view of the customers, which has not received much attention in human tracking, is exploited in this particular context.…”
Section: In the Context of Real-Time Tracking and Navigation Systems
confidence: 99%
“…However, for recovering the 3D state using many views, one may approach the problem as a collection of single 2D views (evaluating the pose individually for each view) and integrate those views geometrically or, more accurately, voxelize the space and evaluate the 3D positioning and tracking [22]. Another approach considers a depth acquisition sensor (such as Kinect-like devices), as in [21]. Our hybrid approach, also taken in [22], uses a 3D particle filter and fully calibrated cameras.…”
Section: Introduction
confidence: 99%
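The 3D particle filter mentioned in the last statement can be illustrated with a generic predict/update/resample cycle. This is a minimal sketch of the general technique, not the implementation from [22] or the paper above; the Gaussian motion model, the `observe` likelihood, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observe, motion_std=0.05):
    """One predict/update/resample cycle over 3D position hypotheses."""
    # Predict: diffuse each particle with Gaussian motion noise.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight each 3D hypothesis by its observation likelihood.
    weights = np.array([observe(p) for p in particles])
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

In a multi-camera setup the likelihood would typically project each 3D hypothesis into every calibrated view and score the agreement with the image evidence; here it is left as an abstract callable.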