2016 International Conference on Indoor Positioning and Indoor Navigation (IPIN)
DOI: 10.1109/ipin.2016.7743581
Binarized-BLSTM-RNN based Human Activity Recognition

Cited by 104 publications (60 citation statements); references 12 publications.
“…(Inoue et al, 2016) first investigated several model parameters and then proposed a relatively good model that can perform HAR with high throughput. (Edel and Köppe, 2016) proposed a binarized-BLSTM-RNN model, in which the weight parameters, inputs, and outputs of all hidden layers are binary values. The main line of RNN-based HAR models is dealing with resource-constrained environments while still achieving good performance.…”
Section: Recurrent Neural Network
confidence: 99%
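The binarization the excerpt describes constrains weights and layer inputs/outputs to the two values {-1, +1}, which is what makes the model cheap on resource-constrained devices. A minimal NumPy sketch of deterministic sign binarization applied to one layer follows; this is an illustrative assumption, not the authors' implementation, and the layer shapes and tanh nonlinearity are invented for the example:

```python
import numpy as np

def binarize(x):
    # Deterministic binarization: map each value to {-1, +1} via the
    # sign function (zeros mapped to +1), as is common in binarized networks.
    return np.where(x >= 0, 1.0, -1.0).astype(np.float32)

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32)).astype(np.float32)  # real-valued "shadow" weights
x = rng.standard_normal(32).astype(np.float32)        # input feature vector

Wb = binarize(W)      # binary weights used in the forward pass
xb = binarize(x)      # binary layer input
h = np.tanh(Wb @ xb)  # one binarized layer; the matrix product reduces to adds/subtracts
```

Because every operand is ±1, the matrix multiply can be implemented with XNOR and popcount operations in hardware, which is the source of the throughput and memory gains the citing papers highlight.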
See 1 more Smart Citation
“…(Inoue et al, 2016) investigated several model parameters first and then proposed a relatively good model which can perform HAR with high throughput. (Edel and Köppe, 2016) proposed a binarized-BLSTM-RNN model, in which the weight parameters, input, and output of all hidden layers are all binary values. The main line of RNN based HAR models is dealing with resource-constrained environments while still achieve good performance.…”
Section: Recurrent Neural Networkmentioning
confidence: 99%
“…
| Reference | Sensor Modality | Deep Model | Application | Dataset |
| --- | --- | --- | --- | --- |
| (Almaslukh et al, 2017) | Body-worn | SAE | ADL | D03 |
| (Alsheikh et al, 2016) | Body-worn | RBM | ADL, factory, Parkinson | D02, D06, D14 |
| | Body-worn, ambient | RBM | Gesture, ADL, transportation | Self, D01 |
| (Chen and Xue, 2015) | Body-worn | CNN | ADL | Self |
| (Chen et al, 2016b) | Body-worn | CNN | ADL | D06 |
| (Cheng and Scotland, 2017) | Body-worn | DNN | Parkinson | Self |
| (Edel and Köppe, 2016) | Body-worn | RNN | ADL | D01, D04, Self |
| (Fang and Hu, 2014) | Object, ambient | DBN | ADL | Self |
| (Gjoreski et al, 2016) | Body-worn | CNN | ADL | Self, D01 |
| (Guan and Ploetz, 2017) | Body-worn, object, ambient | RNN | ADL, smart home | D01, D02, D04 |
| (Ha et al, 2015) | Body-worn | CNN | Factory, health | D02, D13 |
| (Ha and Choi, 2016) | Body-worn | CNN | ADL, health | D13 |
| (Hammerla et al, 2015) | Body-worn | RBM | Parkinson | Self |
| (Hammerla et al, 2016) | Body-worn, object, ambient | DNN, CNN, RNN | ADL, smart home, gait | D01, D04, D14 |
| (Hannink et al, 2017) | Body-worn | CNN | Gait | Self |
| (Hayashi et al, 2015) | Body-worn, ambient | RBM | ADL, smart home | D16 |
| (Inoue et al, 2016) | Body-worn | RNN | ADL | D16 |
| (Jiang and Yin, 2015) | Body-worn | CNN | ADL | D03, D05, D11 |
| (Khan et al, 2017) | Ambient | CNN | Respiration | Self |
| (Kim and Toomajian, 2016) | Ambient | CNN | Hand gesture | Self |
| (Kim and Li, 2017) | Body-worn | CNN | ADL | Self |
| | Body-worn, ambient | RBM | ADL, emotion | Self |
| | Ambient | RBM | ADL | Self |
| (Lee et al, 2017) | Body-worn | CNN | ADL | Self |
| (Li et al, 2016a) | Object | RBM | Patient resuscitation | Self |
| (Li et al, 2016b) | Object | CNN | Patient resuscitation | Self |
| (Li et al, 2014) | Body-worn | SAE | ADL | D03 |
| | Body-worn | CNN, RBM | ADL | Self |
| (Mohammed and Tashev, 2017) | Body-worn | CNN | ADL, gesture | Self |
| (Morales and Roggen, 2016) | Body-worn | CNN | ADL, smart home | D01, D02 |
| (Murad and Pyun, 2017) | Body-worn | RNN | ADL, smart home | D01, D02, D05, D14 |
| (Ordóñez and Roggen, 2016) | Body-worn | CNN, RNN | ADL, gesture, posture, factory | D01, D02 |
| (Panwar et al, 2017) | Body-worn | CNN | ADL | Self |
| (Plötz et al, 2011) | Body-worn, object | RBM | ADL, food preparation, factory | D01, D02, D08, D14 |
…”
Section: Literature
confidence: 99%
“…Ordóñez and Roggen architect an advanced ConvLSTM to fuse data gathered from multiple sensors and perform activity recognition [112]. By leveraging CNN and LSTM structures, ConvLSTMs can automatically compress spatio-temporal sensor data into low-dimensional …

| Reference | Application | Deployment | Model |
| --- | --- | --- | --- |
| [236] | Mobile ear | Edge-based | CNN |
| Jindal [237] | Heart rate prediction | Cloud-based | DBN |
| Kim et al [238] | Cytopathology classification | Cloud-based | CNN |
| Sathyanarayana et al [239] | Sleep quality prediction | Cloud-based | MLP, CNN, LSTM |
| Li and Trocan [240] | Health conditions analysis | Cloud-based | Stacked AE |
| Hosseini et al [241] | Epileptogenicity localisation | Cloud-based | CNN |
| Stamate et al [242] | Parkinson's symptoms management | Cloud-based | MLP |
| Quisel et al [243] | Mobile health data analysis | Cloud-based | CNN, RNN |
| Khan et al [244] | Respiration | | |
| [250] | Facial recognition | Cloud-based | CNN |
| Wu et al [291] | Mobile visual search | Edge-based | CNN |
| Rao et al [251] | Mobile augmented reality | Edge-based | CNN |
| Ohara et al [290] | WiFi-driven indoor change detection | Cloud-based | CNN, LSTM |
| Zeng et al [252] | Activity recognition | Cloud-based | CNN, RBM |
| Almaslukh et al [253] | Activity recognition | Cloud-based | AE |
| Li et al [254] | RFID-based activity recognition | Cloud-based | CNN |
| Bhattacharya and Lane [255] | Smart watch-based activity recognition | Edge-based | RBM |
| Antreas and Angelov [256] | Mobile surveillance system | Edge- & cloud-based | CNN |
| Ordóñez and Roggen [112] | Activity recognition | Cloud-based | ConvLSTM |
| Wang et al [257] | Gesture recognition | Edge-based | CNN, RNN |
| Gao et al [258] | Eating detection | Cloud-based | DBM, MLP |
| Zhu et al [259] | User energy expenditure estimation | Cloud-based | CNN, MLP |
| Sundsøy et al [260] | Individual income classification | Cloud-based | MLP |
| Chen and Xue [261] | Activity recognition | Cloud-based | CNN |
| Ha and Choi [262] | Activity recognition | Cloud-based | CNN |
| Edel and Köppe [263] | Activity recognition | Edge-based | Binarized-LSTM |
| Okita and Inoue [266] | Multiple overlapping activities recognition | Cloud-based | CNN+LSTM |
| Alsheikh et al | | | |
…”
Section: Mobile
confidence: 99%
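The excerpt above describes the general CNN-plus-LSTM pattern: convolutional layers compress each sensor window into spatial features, and a recurrent layer aggregates those features over time into a low-dimensional embedding. A small NumPy sketch of that pattern follows; it is a plain CNN-then-LSTM cascade rather than the true ConvLSTM of [112], and all shapes, the ReLU, and the single-layer LSTM are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d(x, kernels):
    # x: (T, C) sensor window; kernels: (K, width, C) -> (T-width+1, K) feature map.
    T, C = x.shape
    K, w, _ = kernels.shape
    out = np.empty((T - w + 1, K), dtype=np.float32)
    for t in range(T - w + 1):
        patch = x[t:t + w]  # (width, C) slice of the window
        out[t] = np.tensordot(kernels, patch, axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)  # ReLU

def lstm_last_hidden(seq, Wx, Wh, b, H):
    # Minimal LSTM over the feature sequence; returns the final hidden state.
    h = np.zeros(H, dtype=np.float32)
    c = np.zeros(H, dtype=np.float32)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x_t in seq:
        z = Wx @ x_t + Wh @ h + b          # all four gate pre-activations, (4H,)
        i, f, o, g = np.split(z, 4)
        i, f, o, g = sig(i), sig(f), sig(o), np.tanh(g)
        c = f * c + i * g                  # cell state update
        h = o * np.tanh(c)                 # hidden state
    return h

T, C, K, H = 24, 3, 8, 16  # window length, sensor channels, kernels, hidden size
window = rng.standard_normal((T, C)).astype(np.float32)
kernels = rng.standard_normal((K, 5, C)).astype(np.float32) * 0.1
Wx = rng.standard_normal((4 * H, K)).astype(np.float32) * 0.1
Wh = rng.standard_normal((4 * H, H)).astype(np.float32) * 0.1
b = np.zeros(4 * H, dtype=np.float32)

features = conv1d(window, kernels)                     # per-step spatial features
embedding = lstm_last_hidden(features, Wx, Wh, b, H)   # low-dimensional summary, (16,)
```

The final hidden state serves as the compressed representation that a classifier head would map to an activity label.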
“…Second, DL models can be reused for similar tasks, which makes HAR model construction more efficient. Different DL models such as deep neural networks [26,27], convolutional neural networks [10,28], autoencoders [11,29], restricted Boltzmann machines [12,30], and recurrent neural networks [31,32] have been applied in HAR. We refer readers to [8] for more details on DL-based HAR.…”
Section: Human Activity Recognition
confidence: 99%