Growing life expectancy and the increasing incidence of multiple chronic health conditions are significant societal challenges. Different technologies have been proposed to address these issues, to detect critical events such as strokes or falls, and to automatically monitor human activities for health condition inference and anomaly detection. This paper investigates two types of sensing technologies proposed for assisted living: wearable and radar sensors. First, different feature selection methods are validated and compared in terms of accuracy and computational load. Then, information fusion is applied to enhance activity classification accuracy by combining the two sensors. Improvements in classification accuracy of approximately 12% using feature-level fusion are achieved with both Support Vector Machine and K-Nearest Neighbor classifiers. Decision-level fusion schemes are also investigated, yielding classification accuracy on the order of 97-98%.
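As a minimal sketch of the feature-level fusion idea described above, the snippet below concatenates per-sensor feature vectors and evaluates both SVM and KNN classifiers with scikit-learn. The feature dimensions, number of classes, and random data are illustrative assumptions, not the paper's actual features or results.

```python
# Hedged sketch of feature-level fusion with SVM and KNN (scikit-learn).
# Feature dimensions and synthetic data are illustrative assumptions only.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_samples = 200
X_wearable = rng.normal(size=(n_samples, 20))   # e.g., accelerometer statistics
X_radar = rng.normal(size=(n_samples, 30))      # e.g., micro-Doppler features
y = rng.integers(0, 6, size=n_samples)          # six hypothetical activity classes

# Feature-level fusion: concatenate the per-sensor feature vectors.
X_fused = np.concatenate([X_wearable, X_radar], axis=1)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X_fused, y, cv=5)
    print(f"{name} fused-feature accuracy: {scores.mean():.3f}")
```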
This paper presents a framework based on a multi-layer bidirectional Long Short-Term Memory (bi-LSTM) network for multimodal sensor fusion to sense and classify patterns of daily activities and high-risk events such as falls. The data collected in this work are continuous activity streams from an FMCW radar and three wearable inertial sensors on the wrist, waist, and ankle. Each activity in the data stream has a variable duration, so transitions between activities can happen at random times within the stream, without resorting to conventional fixed-duration snapshots. The proposed bi-LSTM implements soft feature fusion between wearable sensor and radar data, as well as two robust hard-fusion methods using the confusion matrices of both sensors. A novel hybrid fusion scheme is then proposed to combine soft and hard fusion, pushing the classification performance to approximately 96% accuracy in identifying continuous activities and fall events. These fusion schemes implemented with the proposed bi-LSTM network are compared with a conventional sliding-window approach, and all are validated with the realistic "leave one participant out" (L1PO) method (i.e., test subjects unknown to the classifier). The developed hybrid-fusion approach stabilizes the classification performance across participants, reducing accuracy variance by up to 18.1% and increasing the minimum, worst-case accuracy by up to 16.2%.
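To make the soft feature fusion concrete, the PyTorch sketch below concatenates radar and wearable features at each time step and feeds the fused sequence to a stacked bi-LSTM with per-time-step outputs, which is what allows labelling continuous streams rather than fixed snapshots. The layer sizes, input dimensions, and class count are assumptions for illustration, not the paper's configuration.

```python
# Minimal PyTorch sketch of soft feature fusion with a multi-layer bi-LSTM.
# All dimensions below are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn

class BiLSTMFusion(nn.Module):
    def __init__(self, radar_dim=64, wearable_dim=9, hidden=128, n_classes=7):
        super().__init__()
        # Soft fusion: concatenate per-time-step radar and wearable features,
        # then let a stacked bidirectional LSTM model the fused sequence.
        self.lstm = nn.LSTM(radar_dim + wearable_dim, hidden,
                            num_layers=2, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, radar_seq, wearable_seq):
        # radar_seq: (batch, time, radar_dim); wearable_seq: (batch, time, wearable_dim)
        fused = torch.cat([radar_seq, wearable_seq], dim=-1)
        out, _ = self.lstm(fused)
        # Per-time-step logits let activity transitions occur at any point
        # in the stream instead of inside fixed-duration windows.
        return self.head(out)

model = BiLSTMFusion()
radar = torch.randn(4, 100, 64)     # hypothetical radar feature sequence
imu = torch.randn(4, 100, 9)        # hypothetical inertial feature sequence
logits = model(radar, imu)          # (4, 100, 7): one prediction per time step
```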
Recognition of human movements with radar for ambient activity monitoring is a well-developed area of research that nevertheless presents outstanding challenges. In real environments, activities and movements are performed with seamless motion, with continuous transitions between activities of different durations and a large range of dynamic motions, in contrast to the discrete activities of fixed time length typically analysed in the literature. This paper proposes a novel approach based on recurrent LSTM and Bi-LSTM network architectures for continuous activity monitoring and classification. This approach uses radar data in the form of a continuous temporal sequence of micro-Doppler or range-time information, unlike other conventional approaches based on convolutional networks that interpret the radar data as images. Experimental radar data involving 15 participants and different sequences of 6 actions are used to validate the proposed approach. It is demonstrated that using the Doppler-domain data together with the Bi-LSTM network and an optimal learning rate can achieve over 90% mean accuracy, whereas range-domain data only achieved approximately 76%. The details of the network architectures, insights into their behaviour as a function of key hyper-parameters such as the learning rate, and a discussion of their performance across participants are provided in the paper.
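The sequence-based view described above can be sketched as follows: each time step of the micro-Doppler spectrogram is treated as a vector of Doppler-bin magnitudes fed to a Bi-LSTM, rather than the whole spectrogram being treated as an image for a CNN. The number of Doppler bins, hidden size, and learning rate below are placeholder assumptions, not the paper's tuned values.

```python
# Sketch: a micro-Doppler spectrogram as a temporal sequence for a Bi-LSTM,
# instead of an image for a CNN. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn

n_doppler_bins, hidden, n_actions = 128, 64, 6

# Each time step of the spectrogram is a vector of Doppler-bin magnitudes.
bilstm = nn.LSTM(n_doppler_bins, hidden, bidirectional=True, batch_first=True)
classifier = nn.Linear(2 * hidden, n_actions)

spectrogram = torch.randn(8, 300, n_doppler_bins)   # (batch, time, Doppler bins)
features, _ = bilstm(spectrogram)
logits = classifier(features)       # (8, 300, 6): one label per time step

# The learning rate is flagged as a key hyper-parameter; 1e-3 here is only a
# common starting point, not the paper's optimal value.
optimizer = torch.optim.Adam(list(bilstm.parameters()) +
                             list(classifier.parameters()), lr=1e-3)
```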
Significant research exists on the use of wearable sensors in the context of assisted living for activity recognition and fall detection, whereas radar sensors have been studied only recently in this domain. This paper addresses the performance limitations of using individual sensors, especially for the classification of similar activities, by implementing information fusion of features extracted from experimental data collected by different sensors, namely a tri-axial accelerometer, a micro-Doppler radar, and a depth camera. Preliminary results confirm that combining information from heterogeneous sensors improves the overall performance of the system. The classification accuracy attained by means of this fusion approach improves by 11.2% compared with radar-only use, and by 16.9% compared with the accelerometer alone. Furthermore, adding features extracted from an RGB-D Kinect sensor increases the overall classification accuracy to 91.3%.
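The incremental comparison reported above (radar only, radar plus accelerometer, then all three sensors) can be sketched as below, where each configuration simply concatenates more feature blocks before classification. The feature dimensions, the SVM choice, and the synthetic data are assumptions for illustration, not the paper's experimental setup.

```python
# Illustrative sketch of incremental heterogeneous-sensor fusion with an SVM.
# Feature dimensions, classifier, and data are assumptions, not the paper's.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n = 300
feats = {
    "accelerometer": rng.normal(size=(n, 12)),  # tri-axial time/frequency stats
    "radar": rng.normal(size=(n, 24)),          # micro-Doppler features
    "kinect": rng.normal(size=(n, 18)),         # depth-camera features
}
y = rng.integers(0, 5, size=n)                  # hypothetical activity labels

def fused_accuracy(feature_sets):
    # Feature-level fusion: concatenate the selected sensors' feature blocks.
    X = np.concatenate(feature_sets, axis=1)
    pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(pipe, X, y, cv=5).mean()

print("radar only:   ", fused_accuracy([feats["radar"]]))
print("radar + accel:", fused_accuracy([feats["radar"], feats["accelerometer"]]))
print("all three:    ", fused_accuracy([feats["radar"], feats["accelerometer"],
                                        feats["kinect"]]))
```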