2018
DOI: 10.3390/s18113726

A Robust Deep Learning Approach for Position-Independent Smartphone-Based Human Activity Recognition

Abstract: Recently, modern smartphones equipped with a variety of embedded sensors, such as accelerometers and gyroscopes, have been used as an alternative platform for human activity recognition (HAR), since they are cost-effective, unobtrusive, and they facilitate real-time applications. However, the majority of the related works have proposed position-dependent HAR, i.e., the target subject has to fix the smartphone in a pre-defined position. Few studies have tackled the problem of position-independent HAR. They hav…

Cited by 96 publications (97 citation statements)
References 42 publications
“…After several rounds of training, the learned features are stacked with labels to form a classifier. (Almaslukh et al., 2017; Wang et al., 2016a) used SAE for HAR, where they first adopted greedy layer-wise pretraining (Hinton et al., 2006) and then performed fine-tuning. Compared to those works, (Li et al., 2014) investigated the sparse autoencoder by adding KL divergence and noise to the cost function, which indicates that adding sparsity constraints can improve the performance of HAR.…”
Section: Autoencoder
confidence: 99%
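The sparsity technique quoted above (a KL-divergence penalty added to an autoencoder's cost function) can be sketched in NumPy. This is a minimal illustration, not the implementation from any of the cited works; the network shapes and the hyperparameters `rho` (target mean activation) and `beta` (penalty weight) are illustrative choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def kl_sparsity(rho, rho_hat, eps=1e-8):
    # KL(rho || rho_hat) for Bernoulli distributions, summed over hidden
    # units; rho_hat is the mean activation of each unit over the batch.
    rho_hat = np.clip(rho_hat, eps, 1.0 - eps)
    return float(np.sum(rho * np.log(rho / rho_hat)
                        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))))

def sparse_ae_loss(X, W1, b1, W2, b2, rho=0.05, beta=3.0):
    # One-hidden-layer autoencoder: reconstruction error plus the
    # weighted KL sparsity penalty on the hidden activations.
    H = sigmoid(X @ W1 + b1)            # encoder
    X_hat = sigmoid(H @ W2 + b2)        # decoder
    recon = 0.5 * np.mean(np.sum((X_hat - X) ** 2, axis=1))
    return recon + beta * kl_sparsity(rho, H.mean(axis=0))

# Toy usage: a batch of 32 nine-dimensional sensor feature vectors.
rng = np.random.default_rng(0)
X = rng.random((32, 9))
W1 = 0.1 * rng.standard_normal((9, 4)); b1 = np.zeros(4)
W2 = 0.1 * rng.standard_normal((4, 9)); b2 = np.zeros(9)
loss = sparse_ae_loss(X, W1, b1, W2, b2)
```

The penalty vanishes when each hidden unit's mean activation equals `rho` and grows as units activate more often, which is what pushes the learned code toward sparsity.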
See 1 more Smart Citation
“…After several rounds of training, the learned features are stacked with labels to form a classifier. (Almaslukh et al, 2017;Wang et al, 2016a) used SAE for HAR, where they first adopted the greedy layer-wise pretraining (Hinton et al, 2006), then performed fine-tuning. Compared to those works, (Li et al, 2014) investigated the sparse autoencoder by adding KL divergence and noise to the cost function, which indicates that adding sparse constraints could improve the performance of HAR.…”
Section: Autoencodermentioning
confidence: 99%
“…
| Reference | Sensor modality | Deep model | Application | Dataset |
| --- | --- | --- | --- | --- |
| (Almaslukh et al., 2017) | Body-worn | SAE | ADL | D03 |
| (Alsheikh et al., 2016) | Body-worn | RBM | ADL, factory, Parkinson | D02, D06, D14 |
| | Body-worn, ambient | RBM | Gesture, ADL, transportation | Self, D01 |
| (Chen and Xue, 2015) | Body-worn | CNN | ADL | Self |
| (Chen et al., 2016b) | Body-worn | CNN | ADL | D06 |
| (Cheng and Scotland, 2017) | Body-worn | DNN | Parkinson | Self |
| (Edel and Köppe, 2016) | Body-worn | RNN | ADL | D01, D04, Self |
| (Fang and Hu, 2014) | Object, ambient | DBN | ADL | Self |
| (Gjoreski et al., 2016) | Body-worn | CNN | ADL | Self, D01 |
| (Guan and Ploetz, 2017) | Body-worn, object, ambient | RNN | ADL, smart home | D01, D02, D04 |
| (Ha et al., 2015) | Body-worn | CNN | Factory, health | D02, D13 |
| (Ha and Choi, 2016) | Body-worn | CNN | ADL, health | D13 |
| (Hammerla et al., 2015) | Body-worn | RBM | Parkinson | Self |
| (Hammerla et al., 2016) | Body-worn, object, ambient | DNN, CNN, RNN | ADL, smart home, gait | D01, D04, D14 |
| (Hannink et al., 2017) | Body-worn | CNN | Gait | Self |
| (Hayashi et al., 2015) | Body-worn, ambient | RBM | ADL, smart home | D16 |
| (Inoue et al., 2016) | Body-worn | RNN | ADL | D16 |
| (Jiang and Yin, 2015) | Body-worn | CNN | ADL | D03, D05, D11 |
| (Khan et al., 2017) | Ambient | CNN | Respiration | Self |
| (Kim and Toomajian, 2016) | Ambient | CNN | Hand gesture | Self |
| (Kim and Li, 2017) | Body-worn | CNN | ADL | Self |
| | Body-worn, ambient | RBM | ADL, emotion | Self |
| | Ambient | RBM | ADL | Self |
| (Lee et al., 2017) | Body-worn | CNN | ADL | Self |
| (Li et al., 2016a) | Object | RBM | Patient resuscitation | Self |
| (Li et al., 2016b) | Object | CNN | Patient resuscitation | Self |
| (Li et al., 2014) | Body-worn | SAE | ADL | D03 |
| | Body-worn | CNN, RBM | ADL | Self |
| (Mohammed and Tashev, 2017) | Body-worn | CNN | ADL, gesture | Self |
| (Morales and Roggen, 2016) | Body-worn | CNN | ADL, smart home | D01, D02 |
| (Murad and Pyun, 2017) | Body-worn | RNN | ADL, smart home | D01, D02, D05, D14 |
| (Ordóñez and Roggen, 2016) | Body-worn | CNN, RNN | ADL, gesture, posture, factory | D01, D02 |
| (Panwar et al., 2017) | Body-worn | CNN | ADL | Self |
| (Plötz et al., 2011) | Body-worn, object | RBM | ADL, food preparation, factory | D01, D02, D08, D14 |
…”
Section: Literature
confidence: 99%
“…Ordóñez and Roggen architect an advanced ConvLSTM to fuse data gathered from multiple sensors and perform activity recognition [112]. By leveraging CNN and LSTM structures, ConvLSTMs can automatically compress spatio-temporal sensor data into low-dimensional …
| Reference | Application | Deployment | Model |
| --- | --- | --- | --- |
| [236] | Mobile ear | Edge-based | CNN |
| Jindal [237] | Heart rate prediction | Cloud-based | DBN |
| Kim et al. [238] | Cytopathology classification | Cloud-based | CNN |
| Sathyanarayana et al. [239] | Sleep quality prediction | Cloud-based | MLP, CNN, LSTM |
| Li and Trocan [240] | Health conditions analysis | Cloud-based | Stacked AE |
| Hosseini et al. [241] | Epileptogenicity localisation | Cloud-based | CNN |
| Stamate et al. [242] | Parkinson's symptoms management | Cloud-based | MLP |
| Quisel et al. [243] | Mobile health data analysis | Cloud-based | CNN, RNN |
| Khan et al. [244] | Respiration | | |
| [250] | Facial recognition | Cloud-based | CNN |
| Wu et al. [291] | Mobile visual search | Edge-based | CNN |
| Rao et al. [251] | Mobile augmented reality | Edge-based | CNN |
| Ohara et al. [290] | WiFi-driven indoor change detection | Cloud-based | CNN, LSTM |
| Zeng et al. [252] | Activity recognition | Cloud-based | CNN, RBM |
| Almaslukh et al. [253] | Activity recognition | Cloud-based | AE |
| Li et al. [254] | RFID-based activity recognition | Cloud-based | CNN |
| Bhattacharya and Lane [255] | Smart watch-based activity recognition | Edge-based | RBM |
| Antreas and Angelov [256] | Mobile surveillance system | Edge- & cloud-based | CNN |
| Ordóñez and Roggen [112] | Activity recognition | Cloud-based | ConvLSTM |
| Wang et al. [257] | Gesture recognition | Edge-based | CNN, RNN |
| Gao et al. [258] | Eating detection | Cloud-based | DBM, MLP |
| Zhu et al. [259] | User energy expenditure estimation | Cloud-based | CNN, MLP |
| Sundsøy et al. [260] | Individual income classification | Cloud-based | MLP |
| Chen and Xue [261] | Activity recognition | Cloud-based | CNN |
| Ha and Choi [262] | Activity recognition | Cloud-based | CNN |
| Edel and Köppe [263] | Activity recognition | Edge-based | Binarized-LSTM |
| Okita and Inoue [266] | Multiple overlapping activities recognition | Cloud-based | CNN+LSTM |
| Alsheikh et al. … | | | |
…”
Section: Mobile
confidence: 99%
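The idea quoted above — convolutional feature extraction over a sensor window followed by recurrent temporal modelling — can be sketched in plain NumPy as a forward pass. This is an illustrative simplification, not Ordóñez and Roggen's actual architecture; all shapes, weights, and the 128-sample tri-axial window are made-up assumptions.

```python
import numpy as np

def conv1d_relu(x, kernels):
    # x: (T, C_in) sensor window; kernels: (C_out, k, C_in).
    # Valid cross-correlation over the time axis, then ReLU.
    C_out, k, _ = kernels.shape
    T = x.shape[0] - k + 1
    out = np.empty((T, C_out))
    for t in range(T):
        out[t] = np.tensordot(kernels, x[t:t + k], axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)

def lstm_last_hidden(seq, Wx, Wh, b):
    # Single LSTM cell unrolled over seq: (T, D); gates packed as
    # [input, forget, cell, output]; returns the final hidden state.
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:
        z = x @ Wx + h @ Wh + b
        i, f, g, o = np.split(z, 4)
        i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
    return h

# Toy usage: a 128-sample tri-axial accelerometer window compressed
# into a 16-dimensional representation (conv features -> LSTM state).
rng = np.random.default_rng(42)
window = rng.standard_normal((128, 3))
kernels = 0.1 * rng.standard_normal((8, 5, 3))
features = conv1d_relu(window, kernels)            # (124, 8)
Wx = 0.1 * rng.standard_normal((8, 64))
Wh = 0.1 * rng.standard_normal((16, 64))
b = np.zeros(64)
code = lstm_last_hidden(features, Wx, Wh, b)       # (16,)
```

The point of the design is visible in the shapes: the convolution turns raw multi-channel samples into local feature vectors, and the LSTM folds that sequence into one fixed-size code suitable for a classifier.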
“…Second, DL models can be reused for similar tasks, which makes HAR model construction more efficient. Different DL models such as deep neural networks [26,27], convolutional neural networks [10,28], autoencoders [11,29], restricted Boltzmann machines [12,30], and recurrent neural networks [31,32] have been applied in HAR. We refer readers to [8] for more details on DL-based HAR.…”
Section: Human Activity Recognition
confidence: 99%
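All of the DL model families listed above consume fixed-length windows cut from a continuous sensor stream. A common preprocessing step, sketched here as a minimal NumPy helper, is sliding-window segmentation; the window length of 128 samples with 50% overlap is a conventional choice, not a parameter taken from the cited works.

```python
import numpy as np

def sliding_windows(signal, win=128, step=64):
    """Segment a (T, channels) sensor stream into fixed-length windows
    with overlap (step < win), the usual input format for DL-based HAR."""
    starts = range(0, signal.shape[0] - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])

# Toy usage: 1000 tri-axial accelerometer samples -> batch of windows.
signal = np.arange(3000, dtype=float).reshape(1000, 3)
windows = sliding_windows(signal)                  # (14, 128, 3)
```

Each window then becomes one training example, which is also why labeled-data cost scales so quickly: every window needs an activity label.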
“…The capability to automatically extract high-level features largely frees DL methods from the drawbacks of conventional ML methods. Many DL-based HAR works have been proposed in recent years [10,11,12,13]. However, DL systems typically require far more training time and data than traditional ML systems, and the large time and labor costs make it difficult to build a large-scale, high-quality labeled human activity dataset.…”
Section: Introduction
confidence: 99%