2017
DOI: 10.3390/app7040316

Fall Detection for Elderly from Partially Observed Depth-Map Video Sequences Based on View-Invariant Human Activity Representation

Abstract: This paper presents a new approach for fall detection from partially-observed depth-map video sequences. The proposed approach utilizes the 3D skeletal joint positions obtained from the Microsoft Kinect sensor to build a view-invariant descriptor for human activity representation, called the motion-pose geometric descriptor (MPGD). Furthermore, we have developed a histogram-based representation (HBR) based on the MPGD to construct a length-independent representation of the observed video subsequences. Using th…
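To make the abstract's idea of a length-independent, histogram-based representation concrete, the following minimal sketch (not taken from the paper; the pairwise-distance feature, bin count, and value range are illustrative assumptions) shows how per-frame skeletal-joint features can be summarized by per-dimension histograms so that subsequences of different lengths map to descriptors of the same size:

import numpy as np

def frame_features(joints):
    # Per-frame feature: pairwise distances between 3D joint positions.
    # `joints` is an (n_joints, 3) array; distances are unchanged by a
    # rotation/translation of the whole skeleton, i.e. by camera viewpoint.
    diffs = joints[:, None, :] - joints[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(joints.shape[0], k=1)
    return dists[iu]                                 # shape: (n_pairs,)

def histogram_representation(sequence, n_bins=16, value_range=(0.0, 2.0)):
    # Length-independent descriptor of a (possibly partial) subsequence.
    # `sequence` is an (n_frames, n_joints, 3) array of skeletal joints.
    # Each feature dimension is summarized by a normalized histogram over
    # frames, so the output size does not depend on n_frames.
    feats = np.stack([frame_features(f) for f in sequence])   # (n_frames, n_pairs)
    hists = []
    for d in range(feats.shape[1]):
        h, _ = np.histogram(feats[:, d], bins=n_bins, range=value_range)
        hists.append(h / max(h.sum(), 1))                      # normalize per dimension
    return np.concatenate(hists)                                # length: n_pairs * n_bins

# Example: clips of different lengths map to descriptors of equal size.
short_clip = np.random.rand(12, 20, 3)   # 12 frames, 20 Kinect joints
long_clip  = np.random.rand(90, 20, 3)   # 90 frames
assert histogram_representation(short_clip).shape == histogram_representation(long_clip).shape

Pairwise joint distances are used here only as a stand-in for the paper's motion-pose features, whose exact definition is not reproduced in the truncated abstract. Histogramming over frames, rather than concatenating per-frame features, is what makes the output size independent of the number of observed frames and thus allows partially observed subsequences to be compared directly.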

Cited by 36 publications (22 citation statements)
References 25 publications
“…The image shown in Figure 11b corresponds to the located height error of the above joint points. As shown in the figures, the range of all the located error pixels is (3, 8).…”
Section: Experimental Results and Validation
Mentioning confidence: 99%
“…Recently, owing to the depth information they provide, depth cameras [2,3] have been applied to estimating 3D human poses and representing human activity. Kong et al. [4] presented a hybrid framework to detect joints automatically based on a depth camera.…”
Section: Introduction
Mentioning confidence: 99%
“…In order to discuss the performances more convincingly, two widely-accepted metrics called precision rate P and recall rate R [27] are selected to measure the detection results quantitatively. As is illustrated in Figure 11, an assumption is made that N_T is the pixel number of true targets existing in the current frame; N_D is the pixel number of targets detected by the tested algorithm, and N_C = N_T ∩ N_D is the pixel number of targets detected correctly.…”
Section: Experimental Results of In-frame Detection
Mentioning confidence: 99%
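The quoted passage defines the pixel counts but not the ratios themselves; under the standard convention (an addition here, not part of the quoted paper), they are

    P = N_C / N_D,    R = N_C / N_T,

i.e. precision is the fraction of detected pixels that are correct, and recall is the fraction of true-target pixels that are detected.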
“…This has opened the door to the detection of abnormal events in an automatic manner. Fall detection is by far the most commonly faced challenge and the top topic in health environments [15,16], but there exist other challenges like monitoring Parkinson's disease [17] or even recognising emotional states [18,19].…”
Section: Related Work
Mentioning confidence: 99%