Growing life expectancy and the increasing incidence of multiple chronic health conditions are significant societal challenges. Different technologies have been proposed to address these issues, to detect critical events such as stroke or falls, and to automatically monitor human activities for health-condition inference and anomaly detection. This paper investigates two types of sensing technologies proposed for assisted living: wearable and radar sensors. First, different feature selection methods are validated and compared in terms of accuracy and computational load. Then, information fusion is applied to enhance activity classification accuracy by combining the two sensors. Improvements in classification accuracy of approximately 12% using feature-level fusion are achieved with both Support Vector Machine and K-Nearest Neighbor classifiers. Decision-level fusion schemes are also investigated, yielding classification accuracy on the order of 97-98%.
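The feature-level fusion described above can be sketched minimally with scikit-learn: concatenate the per-sample feature vectors from the two sensors and train SVM and KNN classifiers on the fused representation. The data below is synthetic and the feature dimensions (6 wearable, 4 radar) are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
# Hypothetical pre-extracted features: 6 from the wearable, 4 from the radar
wearable = rng.normal(size=(n, 6))
radar = rng.normal(size=(n, 4))
labels = rng.integers(0, 3, size=n)  # 3 activity classes for illustration
# Shift class means so the toy problem is learnable
wearable += labels[:, None] * 0.8
radar += labels[:, None] * 0.5

# Feature-level fusion: concatenate the two feature vectors per sample
fused = np.hstack([wearable, radar])
X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, random_state=0)
scaler = StandardScaler().fit(X_tr)

accs = {}
for clf in (SVC(kernel="rbf"), KNeighborsClassifier(n_neighbors=5)):
    clf.fit(scaler.transform(X_tr), y_tr)
    accs[type(clf).__name__] = clf.score(scaler.transform(X_te), y_te)
print(accs)
```

The same pipeline with only `wearable` or only `radar` as input gives the single-sensor baselines against which the fusion gain would be measured.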
This article presents radar signal processing for sensing in the context of assisted living. This is covered through three example applications: human activity recognition for activities of daily living, respiratory disorder detection, and sleep-stage classification. The common challenge of classification is discussed within a framework of measurement/pre-processing, feature extraction, and classification algorithms for supervised learning. Then, the specific challenges of the three applications from a signal processing standpoint are detailed through their specific data processing and ad-hoc classification strategies, focusing on recent trends in the field of activity recognition (multi-domain, multi-modal, and fusion approaches) and in healthcare applications based on vital signs (super-resolution techniques), with commentary on outstanding challenges. To conclude, this paper explores the challenge of real-time implementation of signal processing and classification algorithms.
This paper presents a framework based on a multi-layer bi-LSTM (bidirectional Long Short-Term Memory) network for multimodal sensor fusion to sense and classify patterns of daily activities and high-risk events such as falls. The data collected in this work are continuous activity streams from FMCW radar and three wearable inertial sensors on the wrist, waist, and ankle. Each activity has a variable duration in the data stream, so that transitions between activities can happen at random times within the stream, without resorting to conventional fixed-duration snapshots. The proposed bi-LSTM implements soft feature fusion between wearable sensor and radar data, as well as two robust hard-fusion methods using the confusion matrices of both sensors. A novel hybrid fusion scheme is then proposed to combine soft and hard fusion, pushing the classification performance to approximately 96% accuracy in identifying continuous activities and fall events. These fusion schemes implemented with the proposed bi-LSTM network are compared with a conventional sliding-window approach, and all are validated with a realistic "leave one participant out" (L1PO) method (i.e. testing on subjects unknown to the classifier). The developed hybrid-fusion approach is capable of stabilizing the classification performance across different participants, reducing the accuracy variance by up to 18.1% and increasing the minimum, worst-case accuracy by up to 16.2%.
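The hard-fusion idea described above, combining the two sensors' decisions via their confusion matrices, can be sketched as follows. This is a simplified stand-in for the paper's methods, not their exact formulation: the confusion matrices are invented, each column is normalised into a per-sensor reliability estimate P(true class | predicted class), and the two sensors' evidence is multiplied.

```python
import numpy as np

# Hypothetical validation confusion matrices (rows: true class, columns:
# predicted class), one per sensor, for 3 activity classes
cm_radar = np.array([[80, 15, 5], [10, 85, 5], [5, 10, 85]], dtype=float)
cm_wear = np.array([[90, 5, 5], [20, 70, 10], [5, 5, 90]], dtype=float)

def reliability(cm):
    # P(true class = i | sensor predicted class j): normalise each column
    return cm / cm.sum(axis=0, keepdims=True)

def hard_fuse(pred_radar, pred_wear):
    # Multiply the per-class evidence implied by each sensor's prediction
    evidence = reliability(cm_radar)[:, pred_radar] * reliability(cm_wear)[:, pred_wear]
    return int(np.argmax(evidence))

# Sensors disagree (radar says class 1, wearable says class 0):
# fusion arbitrates using how reliable each prediction historically is
print(hard_fuse(1, 0))
```

When the sensors agree, fusion simply confirms the shared decision; the interesting case is disagreement, where the confusion-matrix columns act as learned trust weights.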
Recognition of human movements with radar for ambient activity monitoring is a developed area of research that still presents outstanding challenges. In real environments, activities and movements are performed with seamless motion, with continuous transitions between activities of different durations and a large range of dynamic motions, in contrast to the discrete activities of fixed-time length typically analysed in the literature. This paper proposes a novel approach based on recurrent LSTM and Bi-LSTM network architectures for continuous activity monitoring and classification. This approach uses radar data in the form of a continuous temporal sequence of micro-Doppler or range-time information, differently from other conventional approaches based on convolutional networks that interpret the radar data as images. Experimental radar data involving 15 participants and different sequences of 6 actions are used to validate the proposed approach. It is demonstrated that using the Doppler-domain data together with the Bi-LSTM network and an optimal learning rate can achieve over 90% mean accuracy, whereas range-domain data only achieved approximately 76%. The paper provides the details of the network architectures, insights into their behaviour as a function of key hyper-parameters such as the learning rate, and a discussion of their performance.
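The key framing above, feeding the micro-Doppler spectrogram as a temporal sequence rather than as an image, can be illustrated with a toy bidirectional recurrent pass. The sketch below uses a plain tanh RNN in NumPy as a stand-in for an LSTM cell (to stay self-contained), and the dimensions (64 Doppler bins, 100 time steps, hidden size 16) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical micro-Doppler spectrogram: 100 time steps x 64 Doppler bins.
# Instead of treating it as a 2-D image, feed it as a length-100 sequence
# of 64-dimensional Doppler vectors, one per time step.
spectrogram = rng.normal(size=(100, 64))

hidden = 16
Wf = rng.normal(scale=0.1, size=(hidden, 64))  # forward input weights
Uf = rng.normal(scale=0.1, size=(hidden, hidden))  # forward recurrent weights
Wb = rng.normal(scale=0.1, size=(hidden, 64))  # backward input weights
Ub = rng.normal(scale=0.1, size=(hidden, hidden))  # backward recurrent weights

def rnn_pass(seq, W, U):
    # Plain tanh RNN, a simplified stand-in for one LSTM direction
    h = np.zeros(hidden)
    for x in seq:
        h = np.tanh(W @ x + U @ h)
    return h

# Bidirectional: run the sequence forwards and backwards, then concatenate
# the two final states; a classifier head would map this to activity labels
h_cat = np.concatenate([rnn_pass(spectrogram, Wf, Uf),
                        rnn_pass(spectrogram[::-1], Wb, Ub)])
print(h_cat.shape)  # (32,)
```

The backward pass is what lets the network use context on both sides of an activity transition, which is the motivation for Bi-LSTM over a unidirectional LSTM in continuous streams.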
Radar-based human motion recognition is crucial for many applications such as surveillance, search and rescue operations, smart homes, and assisted living. Continuous human motion recognition in real living environments is necessary for practical deployment, i.e. classification of a sequence of activities transitioning one into another, rather than of individual activities. In this paper, a novel Dynamic Range-Doppler Trajectory (DRDT) method based on a frequency-modulated continuous-wave (FMCW) radar system is proposed to recognize continuous human motions under various conditions emulating real living environments. This method can separate continuous motions and process them as single events. First, range-Doppler frames consisting of a series of range-Doppler maps are obtained from the backscattered signals. Next, the DRDT is extracted from these frames to monitor human motions in the time, range, and Doppler domains in real time. Then, a peak search method is applied to locate and separate each human motion on the DRDT map. Finally, range, Doppler, radar cross-section (RCS), and dispersion features are extracted and combined in a multi-domain fusion approach as inputs to a machine learning classifier. This achieves accurate and robust recognition even under varying conditions of distance, view angle, direction, and individual diversity. Extensive experiments have been conducted to show the method's feasibility and superiority, obtaining an average accuracy of 91.9% on continuous classification.
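The separation step described above, locating individual motions within a continuous stream via a peak search, can be sketched as follows. The sketch collapses each range-Doppler frame to a scalar energy value and segments the resulting 1-D trace with a simple threshold-crossing search; the frame sizes, event times, and thresholding rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical stream: 300 range-Doppler frames of 32 range x 32 Doppler bins
frames = rng.normal(scale=0.1, size=(300, 32, 32)) ** 2
# Inject three synthetic "motion events" as 20-frame bursts of energy
for t0 in (50, 150, 240):
    frames[t0 - 10:t0 + 10] += 1.0

# Collapse each frame to its total energy -> a 1-D trajectory over time
energy = frames.sum(axis=(1, 2))

def find_events(trace, thresh):
    # Simple peak search: each contiguous run above the threshold is
    # treated as one motion event, returned as a (start, end) frame pair
    above = trace > thresh
    edges = np.flatnonzero(np.diff(above.astype(int)))
    starts = edges[::2] + 1
    ends = edges[1::2] + 1
    return list(zip(starts, ends))

events = find_events(energy, thresh=energy.mean() + energy.std())
print(len(events))  # 3
```

Each detected (start, end) segment would then be cropped from the stream and passed to feature extraction and classification as a single event, as in the paper's pipeline.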