2022
DOI: 10.1016/j.jvcir.2021.103407

Fall detection using body geometry and human pose estimation in video sequences

Cited by 28 publications (3 citation statements)
References 33 publications
“…The authors of [30] presented a solution, PRF-PIR, which integrates multimodal sensor fusion with passive and interpretable monitoring for long-term monitoring. The system comprised a software-defined radio (SDR) device and a novel passive infrared (PIR) sensor system.…”
Section: Explainable and Interpretable Fall Detection
Citation type: mentioning
Confidence: 99%
“…4) Computer vision: [43] produced the dataset by utilizing infrared array sensor temperature data. [44] made use of the Le2i FD, the URFD dataset, and the cross-dataset. Using classified movies, a bespoke data set was created [45]. [46] created SimgFall, a signal-based picture dataset.…”
Section: Comparison
Citation type: mentioning
Confidence: 99%
“…Working together in these conditions involves coordination that requires reasoning about how people pose and move (Maeda et al 2014; Mainprice, Rafi, and Berenson 2015; Mörtl et al 2012; Unhelkar et al 2015). Even without direct physical proximity, robots often need to think about a person's physical posture in applications such as sports coaching (Ross, Broz, and Baillie 2019) and elder care, which spans tasks from exercise engagement (Fasola and Matarić 2013) to fall detection (Faria et al 2015; Di et al 2013; Beddiar, Oussalah, and Nini 2022). In motion planning, robots need to further think about their own embodiment and posture to accomplish tasks; learning from human demonstrations via observation (Inamura, Nakamura, and Shimozaki 2002; Inamura, Toshima, and Nakamura 2002) has to map human motion into the robot's analogous configurations.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%