2021
DOI: 10.1007/s11370-021-00358-7

Combining CNN and LSTM for activity of daily living recognition with a 3D matrix skeleton representation

Abstract: In socially assistive robotics, human activity recognition plays a central role when the adaptation of the robot behavior to the human one is required. In this paper, we present an activity recognition approach for activities of daily living based on deep learning and skeleton data. In the literature, ad hoc feature extraction/selection algorithms with supervised classification methods have been deployed, reaching excellent classification performance. Here, we propose a deep learning approach, combining CN…
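To make the combination of spatial and temporal modelling concrete, below is a minimal, hypothetical PyTorch sketch of a CNN-LSTM over skeleton sequences: a small CNN summarizes each frame's joint-coordinate matrix, and an LSTM models the sequence of per-frame features. All shapes (15 joints, 3 coordinates, 12 classes) and layer sizes are illustrative assumptions, not the architecture from the paper.

```python
# Illustrative CNN-LSTM for skeleton-based activity recognition (not the paper's model).
# Hypothetical shapes: sequences of T frames, each frame a 15x3 matrix of
# (x, y, z) joint coordinates; num_classes is arbitrary.
import torch
import torch.nn as nn


class CnnLstmHAR(nn.Module):
    def __init__(self, num_joints=15, num_coords=3, hidden=128, num_classes=12):
        super().__init__()
        # Per-frame CNN: treats the joints-by-coordinates matrix as a 1-channel image
        # and extracts a spatial feature vector for each frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 1)),
            nn.Flatten(),                       # -> 32 * 4 * 1 = 128 features per frame
        )
        # LSTM models the temporal evolution of the per-frame features.
        self.lstm = nn.LSTM(input_size=128, hidden_size=hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, x):
        # x: (batch, T, num_joints, num_coords)
        b, t, j, c = x.shape
        frames = x.reshape(b * t, 1, j, c)      # fold time into the batch for the CNN
        feats = self.cnn(frames).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(feats)          # keep the last hidden state
        return self.classifier(h_n[-1])


# Usage example with random data: 8 clips of 30 frames each.
model = CnnLstmHAR()
clips = torch.randn(8, 30, 15, 3)
logits = model(clips)                           # -> (8, 12) class scores
```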

Citations: Cited by 25 publications (12 citation statements)
References: 30 publications
“…Mutegeki et al [76] used the UCI smartphone sensor dataset to identify human activities using a CNN-LSTM architecture. Ercolano and Rossi [77] proposed a CNN-LSTM-based architecture using video data (the CAD-60 dataset) for HAR. In all the aforementioned works, researchers used combinations of CNNs and LSTM to extract spatial and temporal features.…”
Section: A. Deep Learning for HAR
confidence: 99%
“…It should be noted that since no environmental sensors are used to detect the user position or activity, Find, Look and Approach User are microservices provided by the robot. In the same way, the Activity Recognition microservice is realized by using the robot camera once the user is recognized by the robot (Ercolano and Rossi 2021). Moreover, there are monitoring services whose results are registered in the user profile database as Daily Observations to take into account dynamic events impacting the user's state.…”
Section: Daily Assistive Actions as Microservices
confidence: 99%
“…These ethical issues pose challenges for evaluating the user's perception of the received assistance, and for correctly evaluating the performance of some of the developed services in the wild. For example, while activity recognition was tested on laboratory data (Ercolano and Rossi 2021), neither video recording nor skeleton tracking was included as a means of validation during the experimentation. How to properly evaluate these functionalities in an ecologically relevant environment is an open issue.…”
Section: Limitations and Future Perspective
confidence: 99%
“…Most methods introduced for HAR are supervised (parts (d) and (e) in Fig 1), training from labeled data or data with known activities. They described the relation between skeleton joints in group activity recognition [27], used deep reinforcement learning, formulating joints as a Markov model to select informative ones [28], continuously learned from skeleton activity using a brain-inspired elastic network [29], and used a hybrid CNN-LSTM to extract spatial and temporal features [30]. The performance of these methods was strongly dependent on training data labeled with ground truth.…”
Section: Human Activity Recognition and Discovery
confidence: 99%