Tracking human physical activity using smartphones is an emerging trend in healthcare monitoring and healthy lifestyle management. Neural networks are widely used to analyze inertial data for activity recognition. Inspired by autoencoder neural networks, we propose a layer-wise network, namely the principal coefficient encoder model (PCEM). Unlike vanilla neural networks, which apply random weight initialization and back-propagation for parameter updating, PCEM implements an optimized weight initialization via principal coefficient learning. This principal coefficient encoding allows rapid data learning with no back-propagation and no extensive hyperparameter tuning. In PCEM, the most principal coefficients of the training data are taken as the network weights. Two hidden layers with principal coefficient encoding are stacked in PCEM to form a deeper architecture. The performance of PCEM is evaluated under a subject-independent protocol in which training and testing samples come from different users, with no overlapping subjects between the training and testing sets. This subject-independent protocol better assesses the model's generalization to new data. Experimental results show that PCEM outperforms several state-of-the-art machine learning and deep learning models, including the convolutional neural network and the deep belief network. PCEM achieves approximately 97% accuracy in subject-independent human activity analysis.
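To make the layer-wise idea concrete, the sketch below shows one plausible reading of PCEM under stated assumptions: principal component analysis (PCA) stands in for the principal coefficient learning, two PCA-initialized hidden layers are stacked with a tanh nonlinearity, and a closed-form ridge classifier serves as the output stage, so no back-propagation is involved. The layer sizes, the nonlinearity, and the classifier are illustrative choices, not details confirmed by the abstract.

```python
# Minimal sketch of a PCEM-style layer-wise encoder.
# Assumptions: PCA components act as the "principal coefficients" that become the
# hidden-layer weights, and a ridge classifier is the output stage; neither detail
# is specified in the abstract.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import RidgeClassifier

def pcem_fit(X_train, y_train, n_components=(64, 32)):
    """Stack two hidden layers whose weights come from PCA; no back-propagation."""
    layers, H = [], X_train
    for k in n_components:
        pca = PCA(n_components=k).fit(H)   # principal coefficients of the current representation
        H = np.tanh(pca.transform(H))      # encode and apply a nonlinearity
        layers.append(pca)
    clf = RidgeClassifier().fit(H, y_train)  # closed-form output layer, still no back-prop
    return layers, clf

def pcem_predict(layers, clf, X):
    H = X
    for pca in layers:
        H = np.tanh(pca.transform(H))
    return clf.predict(H)
```

In this reading, training reduces to two PCA fits plus one closed-form classifier fit, which is what makes the model fast to train and free of gradient-based hyperparameters.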
Background: In recent years, human activity recognition (HAR) has been an active research topic due to its widespread application in fields such as healthcare, sports, and patient monitoring. HAR approaches can be categorised into handcrafted feature (HCF) methods and deep learning (DL) methods. HCF methods involve complex data pre-processing and manual feature extraction, which may expose the models to high bias and the loss of crucial implicit patterns. Hence, DL approaches have been introduced owing to their exceptional recognition performance. The Convolutional Neural Network (CNN) extracts spatial features while preserving localisation, but it hardly captures temporal features. The Recurrent Neural Network (RNN) learns temporal features, but it is susceptible to vanishing gradients and suffers from short-term memory problems. Unlike the RNN, the Long Short-Term Memory (LSTM) network captures relatively longer-term dependencies; however, it consumes more computation and memory because it computes and stores partial results at each step. Methods: This work proposes a novel multiscale temporal convolutional network (MSTCN) based on the Inception model with a temporal convolutional architecture. Unlike HCF methods, MSTCN requires minimal pre-processing and no manual feature engineering. Further, multiple separable convolutions with different-sized kernels are used in MSTCN for multiscale feature extraction. Dilations are applied to each separable convolution to enlarge the receptive fields without increasing the model parameters. Moreover, residual connections are utilised to prevent information loss and gradient vanishing. These features give MSTCN a longer effective history while keeping in-network computation relatively low. Results: The performance of MSTCN is evaluated on the UCI and WISDM datasets using a subject-independent protocol with no overlapping subjects between the training and testing sets. MSTCN achieves F1 scores of 0.9752 on UCI and 0.9470 on WISDM. Conclusion: The proposed MSTCN outperforms other state-of-the-art methods, achieving high recognition accuracy without requiring any manual feature engineering.
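The sketch below illustrates one MSTCN-style building block as described in the abstract: Inception-style parallel separable convolutions with different kernel sizes, dilation to enlarge the receptive field without adding parameters, and a residual connection. The specific kernel sizes, channel counts, dilation rate, and the 1x1 residual projection are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of one MSTCN-style multiscale block.
# Assumptions: kernel sizes (3, 5, 7), 32 channels per branch, dilation 2, and a
# 1x1 convolution on the skip path are illustrative choices, not the paper's values.
import torch
import torch.nn as nn

class SeparableConv1d(nn.Module):
    """Depthwise convolution followed by a pointwise convolution, with dilation."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        pad = (kernel_size - 1) * dilation // 2  # keep the sequence length unchanged
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size, padding=pad,
                                   dilation=dilation, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class MultiscaleBlock(nn.Module):
    """Inception-style parallel separable convolutions with a residual connection."""
    def __init__(self, in_ch, branch_ch=32, kernel_sizes=(3, 5, 7), dilation=2):
        super().__init__()
        self.branches = nn.ModuleList(
            [SeparableConv1d(in_ch, branch_ch, k, dilation) for k in kernel_sizes]
        )
        out_ch = branch_ch * len(kernel_sizes)
        self.residual = nn.Conv1d(in_ch, out_ch, kernel_size=1)  # match channels on the skip path
        self.act = nn.ReLU()

    def forward(self, x):  # x: (batch, channels, time)
        multi = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.act(multi + self.residual(x))
```

Stacking several such blocks with increasing dilation rates is one way to obtain the long effective history described in the abstract while keeping the parameter count low, since separable convolutions split each kernel into a cheap depthwise and pointwise pair.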