2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
DOI: 10.1109/percomworkshops48775.2020.9156264
A Comparison of Unidirectional and Bidirectional LSTM Networks for Human Activity Recognition

Cited by 35 publications (12 citation statements)
References 22 publications
“…The advantage of using an LSTM network over a recurrent neural network (RNN) is that LSTM can capture the temporal dependency of input sequences during the training process [21, 34]. An LSTM network is an RNN that prevents the long-term dependency problem by utilizing gates and calculating a hidden state with an enhanced function [17]. The building blocks of LSTM networks are LSTM cells, which means that an LSTM layer consists of recurrently connected cells [16, 17, 34].…”
Section: Methods
confidence: 99%
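The gating mechanism the quoted passage describes can be made concrete with a short sketch. Below is a minimal NumPy implementation of a single LSTM cell step; the variable names and stacked-gate layout are illustrative conventions, not details taken from the cited papers.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step. W: (4H, D) input weights, U: (4H, H) recurrent
    weights, b: (4H,) bias; gates stacked as [input, forget, output, cell]."""
    H = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b      # all four gate pre-activations at once
    i = sigmoid(z[0:H])               # input gate
    f = sigmoid(z[H:2*H])             # forget gate
    o = sigmoid(z[2*H:3*H])           # output gate
    g = np.tanh(z[3*H:4*H])           # candidate cell state
    c_t = f * c_prev + i * g          # gated cell update retains long-term info
    h_t = o * np.tanh(c_t)            # hidden state passed to the next step
    return h_t, c_t

# usage: a single step with random parameters (D=6 inputs, H=4 hidden units)
D, H = 6, 4
rng = np.random.default_rng(0)
W, U, b = rng.normal(size=(4*H, D)), rng.normal(size=(4*H, H)), np.zeros(4*H)
h, c = lstm_cell(rng.normal(size=D), np.zeros(H), np.zeros(H), W, U, b)
```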
“…For example, an original RNN is incapable of remembering past data, while a long short-term memory (LSTM) network is a modified version of RNN-based networks capable of remembering and learning from past data [16]. This means that LSTMs are suitable for building a prediction model when the datasets are in the form of time series, due to their ability to learn temporal dependencies by employing a gating mechanism for data analysis [17]. Moreover, LSTM solves the vanishing gradient problem, unlike the original RNN [18].…”
Section: Introduction
confidence: 99%
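To illustrate the time-series use case this passage describes, here is a minimal PyTorch sketch of an LSTM that predicts the next sample of a multichannel series from a window of past samples. The layer sizes and the forecasting head are assumptions chosen for illustration, not details from the cited works.

```python
import torch
import torch.nn as nn

class SeriesForecaster(nn.Module):
    """Minimal LSTM regressor: maps a window of past samples to the next value."""
    def __init__(self, n_features: int, hidden_size: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_features)

    def forward(self, x):                # x: (batch, time, n_features)
        out, _ = self.lstm(x)            # out: (batch, time, hidden)
        return self.head(out[:, -1])     # predict from the last hidden state

# usage: one gradient step on random data, just to show the shapes
model = SeriesForecaster(n_features=3)
window = torch.randn(8, 50, 3)           # 8 windows, 50 time steps, 3 channels
target = torch.randn(8, 3)
loss = nn.functional.mse_loss(model(window), target)
loss.backward()
```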
“…This enables the network to have the sequence information in both directions and is known to perform better than unidirectional LSTM models in other sequence modeling tasks [86-88]. The specific RNN model employed here had three bidirectional LSTM layers with 4-dimensional hidden states followed by a fully connected linear layer. The model was again trained on the sequence vectors.…”
Section: F. Representation Learning
confidence: 99%
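Read literally, the quoted description (three bidirectional LSTM layers with 4-dimensional hidden states, then a fully connected linear layer) maps onto a few lines of PyTorch. The input and output sizes below are placeholders, since the excerpt does not state them.

```python
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    """Sketch of the quoted setup: three stacked bidirectional LSTM layers with
    4-dimensional hidden states, followed by a fully connected linear layer.
    Input/output sizes are assumed; the excerpt does not specify them."""
    def __init__(self, n_features: int = 8, n_outputs: int = 16):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size=4, num_layers=3,
                            bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 4, n_outputs)  # 2x hidden: forward + backward

    def forward(self, x):                      # x: (batch, time, n_features)
        out, _ = self.lstm(x)                  # out: (batch, time, 8)
        return self.fc(out[:, -1])             # embed via the final time step
```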
“…We specifically used LSTM layers in a bidirectional architecture, i.e., the network included forward and backward LSTM layers that were trained simultaneously, so that both time directions (past and future) were considered and contributed to the outcome [33]. The bidirectional architecture has improved model performance in problems dealing with inertial sensor data, e.g., in [25] or [39]. In the gait analysis domain, double integration with linear dedrifting or direct and reverse integration is typically applied to bound integration errors, considering that the foot should have zero velocity at the initial and final instants of the stride [19, 40, 41].…”
Section: Network Architecture
confidence: 99%
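As a rough illustration of the "double integration with linear dedrifting" baseline this passage mentions, the NumPy sketch below integrates one stride of foot acceleration twice, removing a linear velocity ramp so that velocity is zero at both stride boundaries. This is one common reading of the technique, not necessarily the exact procedure used in [19, 40, 41].

```python
import numpy as np

def stride_displacement(acc, fs):
    """Double-integrate one stride of foot acceleration (m/s^2, gravity removed),
    de-drifting velocity linearly so it is zero at both stride boundaries,
    per the zero-velocity assumption in the quoted passage.
    acc: (T, 3) array for a single stride; fs: sampling rate in Hz."""
    dt = 1.0 / fs
    vel = np.cumsum(acc, axis=0) * dt        # first integration -> velocity
    # residual velocity at stride end is attributed to sensor drift;
    # subtract a linear ramp from 0 to that residual (linear de-drifting)
    ramp = np.linspace(0.0, 1.0, len(vel))[:, None]
    vel -= ramp * vel[-1]
    pos = np.cumsum(vel, axis=0) * dt        # second integration -> displacement
    return pos

# usage with synthetic data: a 1 s stride sampled at 100 Hz
acc = np.random.randn(100, 3) * 0.1
disp = stride_displacement(acc, fs=100)
```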