Proceedings of the 15th International Conference on Informatics in Control, Automation and Robotics 2018
DOI: 10.5220/0006902405040511
An LSTM-based Descriptor for Human Activities Recognition using IMU Sensors

Cited by 12 publications (3 citation statements). References 0 publications.
“…For LSTMs, the number of trainable parameters is high compared to other deep learning architectures. As a result, the tuning of LSTM parameters can be challenging [53]. We propose an architecture with a single LSTM layer.…”
Section: D Convolutional Neural Netmentioning
confidence: 99%
“…Recently, the Long Short‐Term Memory (LSTM) network and its variants have shown good performance in tackling various sequence modeling tasks. The LSTM deep neural network has been widely used for human activity recognition. An LSTM layer is a recurrent neural network (RNN) layer, which supports time and data series in the network.…”
Section: Location‐related Activity Detection and Positioning In Indoomentioning
confidence: 99%
“…An LSTM layer is a recurrent neural network (RNN) layer, which supports time and data series in the network. The greatest advantage of RNNs is their capability to take contextual information into consideration when mapping between input and output sequences through hidden layer units. LSTM can automatically extract useful features and model the inexplicit criterion.…”
Section: Location‐related Activity Detection and Positioning In Indoomentioning
confidence: 99%
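The statements above describe the core mechanism: a single LSTM layer stepped over an IMU time series, whose gates let the cell carry contextual information across time steps. The sketch below is a minimal scalar LSTM cell in pure Python, not the paper's implementation; the weight values and the toy accelerometer-like sequence are hypothetical, chosen only to illustrate the gate equations.

```python
import math

def sigmoid(x):
    """Logistic activation used by the input, forget, and output gates."""
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w=1.0, u=1.0, b=0.0):
    """One step of a scalar LSTM cell (shared toy weights for brevity).

    A real layer uses separate weight matrices per gate and vector states;
    here i/f/o are the input/forget/output gates and g the candidate update.
    """
    i = sigmoid(w * x + u * h_prev + b)    # input gate
    f = sigmoid(w * x + u * h_prev + b)    # forget gate
    o = sigmoid(w * x + u * h_prev + b)    # output gate
    g = math.tanh(w * x + u * h_prev + b)  # candidate cell update
    c = f * c_prev + i * g                 # new cell state (memory)
    h = o * math.tanh(c)                   # new hidden state (output)
    return h, c

# Step the cell over a short, hypothetical accelerometer-magnitude window
# (one scalar feature per time step).
imu_sequence = [0.2, 0.5, 0.9, 0.4, 0.1]
h, c = 0.0, 0.0
for x in imu_sequence:
    h, c = lstm_step(x, h, c)

# The final hidden state h summarizes the whole window; in an activity
# recognizer it would feed a classifier head (e.g. a softmax over labels).
print(round(h, 4))
```

Because the output gate is in (0, 1) and tanh is bounded, the hidden state stays in (-1, 1) regardless of sequence length, which is part of why a single LSTM layer remains trainable where a plain RNN would saturate or explode.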