2022
DOI: 10.1007/s00530-021-00875-6

Towards a multimodal human activity dataset for healthcare

Cited by 14 publications (1 citation statement)
References 44 publications
“…Complementary to vision data, inertial sensor data (e.g., gyroscopes and accelerometers) provide position and direction information of the wearable device, which may facilitate human activity recognition for egocentric videos. Recently, with the advancement and application of wearable inertial sensors, multi-modal methods, i.e., combining vision data and sensor data to recognize human activities, are of widespread interest, which may promote vision-based methods [16], [17], [18]. Some pioneering work [17] uses LSTM to learn the feature from sensor data and CNNs to learn the feature from vision data, which are fused together to predict wearer's activity.…”
Section: Introduction (mentioning)
confidence: 99%
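The fusion approach described in the citation statement, a CNN encoding vision frames and an LSTM encoding inertial sensor sequences, with the two feature vectors combined before classification, can be illustrated with a minimal sketch. The sketch below is a hedged illustration in PyTorch; all layer sizes, input shapes, and names (e.g., MultimodalActivityNet, the 6-channel sensor assumption) are illustrative assumptions, not taken from the cited work.

# Minimal sketch of late fusion for multimodal activity recognition:
# a CNN branch encodes an RGB frame, an LSTM branch encodes an inertial
# sensor window, and the two embeddings are concatenated and classified.
# All sizes below are assumed for illustration only.
import torch
import torch.nn as nn


class MultimodalActivityNet(nn.Module):
    def __init__(self, num_classes: int = 10, sensor_channels: int = 6):
        super().__init__()
        # Vision branch: small CNN for a single frame (assumed 3 x 64 x 64 input)
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (B, 32, 1, 1)
        )
        # Sensor branch: LSTM over a window of accelerometer + gyroscope
        # readings (assumed 6 channels per time step)
        self.lstm = nn.LSTM(input_size=sensor_channels, hidden_size=32,
                            batch_first=True)
        # Late fusion: concatenate both 32-d embeddings, then classify
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, frame: torch.Tensor, sensor_seq: torch.Tensor):
        # frame: (B, 3, 64, 64); sensor_seq: (B, T, sensor_channels)
        vis_feat = self.cnn(frame).flatten(1)           # (B, 32)
        _, (h_n, _) = self.lstm(sensor_seq)             # h_n: (1, B, 32)
        sen_feat = h_n[-1]                              # (B, 32)
        fused = torch.cat([vis_feat, sen_feat], dim=1)  # (B, 64)
        return self.classifier(fused)                   # activity logits


if __name__ == "__main__":
    model = MultimodalActivityNet()
    frames = torch.randn(4, 3, 64, 64)   # batch of RGB frames
    sensors = torch.randn(4, 50, 6)      # 50-step inertial windows
    print(model(frames, sensors).shape)  # torch.Size([4, 10])

Concatenation is only one fusion choice; the citing work surveyed here notes that combining vision and inertial features in this general way is what makes the multimodal methods of interest.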