Smart devices such as smartphones and smartwatches are promising platforms for the automatic recognition of human activities. However, it is difficult to accurately monitor complex human activities on these platforms due to inter-class pattern similarity, which occurs when different human activities exhibit similar signal patterns or characteristics. Current smartphone-based recognition systems depend on traditional sensors, such as accelerometers and gyroscopes, that are built into these devices. Beyond the information these traditional sensors provide, such systems therefore lack the contextual information needed to support automatic activity recognition. In this article, we explore environmental contexts, such as illumination (light conditions) and noise level, to support sensory data obtained from the traditional sensors, using a hybrid Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) learning model. The models perform sensor fusion by augmenting low-level sensor signals with rich contextual data to improve recognition accuracy and generalization. Two sets of experiments were performed to validate the proposed solution. The first used triaxial inertial sensing signals to train baseline models, while the second combined the inertial signals with contextual information from environmental sensors. The results demonstrate that hybrid deep learning models supplied with contextual information, such as environmental noise level and light conditions, achieve better recognition accuracy than baseline activity recognition models without contextual information.
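To make the fusion idea concrete, the sketch below shows one plausible way to combine a CNN-LSTM branch over windowed inertial signals with an environmental-context vector. This is a minimal illustration, not the authors' exact architecture: the window length (128 samples), channel count (6: triaxial accelerometer plus triaxial gyroscope), context size (2: light level and noise level), class count (6), and all layer widths are assumptions chosen for readability.

```python
# Hypothetical CNN-LSTM sensor-fusion sketch in Keras; dimensions and
# layer sizes are illustrative assumptions, not values from the paper.
import tensorflow as tf
from tensorflow.keras import layers, Model

WINDOW, CHANNELS, CONTEXT_DIM, NUM_CLASSES = 128, 6, 2, 6  # assumed

# Branch 1: windowed triaxial accelerometer + gyroscope -> CNN -> LSTM.
inertial_in = layers.Input(shape=(WINDOW, CHANNELS), name="inertial")
x = layers.Conv1D(64, kernel_size=5, activation="relu")(inertial_in)
x = layers.MaxPooling1D(pool_size=2)(x)
x = layers.Conv1D(64, kernel_size=5, activation="relu")(x)
x = layers.LSTM(64)(x)  # temporal summary of the convolutional features

# Branch 2: environmental context (e.g., light level, noise level).
context_in = layers.Input(shape=(CONTEXT_DIM,), name="context")
c = layers.Dense(16, activation="relu")(context_in)

# Late fusion: concatenate learned inertial features with context,
# then classify the activity.
fused = layers.Concatenate()([x, c])
fused = layers.Dense(64, activation="relu")(fused)
out = layers.Dense(NUM_CLASSES, activation="softmax", name="activity")(fused)

model = Model(inputs=[inertial_in, context_in], outputs=out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Dropping the context branch (training on the inertial input alone) reproduces the baseline setting of the first experiment set, so the contribution of contextual information can be measured by comparing the two configurations on the same data splits.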