The widespread use of wearable sensors, such as those in smartwatches, has provided continuous access to valuable user-generated data, such as human motion, that can be used to identify an individual based on their motion patterns, such as gait. Several methods have been proposed to extract heuristic and high-level features from gait motion data in order to identify discriminative gait signatures and distinguish a target individual from others. However, manual, hand-crafted feature extraction is error-prone and subjective. Furthermore, the motion data collected from inertial sensors have a complex structure, and the decoupling of the manual feature extraction module from the predictive learning model can limit generalization. In this paper, we propose a novel approach for human gait identification that uses a time-frequency (TF) expansion of human gait cycles to capture joint two-dimensional (2D) spectral and temporal patterns of gait cycles. We then design a deep convolutional neural network (DCNN) to extract discriminative features from the 2D expanded gait cycles and to jointly optimize the identification model and the spectro-temporal features in a discriminative fashion. We collect raw motion data synchronously from five inertial sensors placed at the chest, lower back, right wrist, right knee, and right ankle of each subject in order to investigate the impact of sensor location on gait identification performance. We then present two methods for early (input-level) and late (decision-score-level) multi-sensor fusion to improve generalization. In particular, we propose the minimum error score fusion (MESF) method, which discriminatively learns the linear fusion weights of the individual DCNN scores at the decision level by iteratively minimizing the error rate on the training data. Ten subjects participated in this study, making the problem a 10-class identification task. In our experiments, 91% subject identification accuracy was achieved using the best individual IMU with the 2DTF-DCNN. The proposed early and late sensor fusion approaches further improved the gait identification accuracy to 93.36% and 97.06%, respectively.
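The abstract does not spell out the MESF update rule, so the following is a minimal, hypothetical Python sketch of decision-level fusion that learns non-negative per-sensor weights by an iterative coordinate search over a weight grid, minimizing the training error rate. The function name mesf_fuse, the grid resolution, and the greedy coordinate updates are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def mesf_fuse(scores, labels, n_iters=50, grid=np.linspace(0.0, 1.0, 21)):
    """Minimum error score fusion (hypothetical sketch).

    scores : list of per-sensor DCNN score matrices, each (n_samples, n_classes)
    labels : (n_samples,) ground-truth class indices
    Learns non-negative linear fusion weights by a coordinate-wise grid search
    that greedily minimizes the training error rate.
    """
    n_sensors = len(scores)
    w = np.ones(n_sensors) / n_sensors            # start from uniform weights

    def error_rate(weights):
        fused = sum(wi * si for wi, si in zip(weights, scores))
        return np.mean(fused.argmax(axis=1) != labels)

    for _ in range(n_iters):
        improved = False
        for k in range(n_sensors):
            best_v, best_err = w[k], error_rate(w)
            for v in grid:                        # try candidate weights for sensor k
                trial = w.copy()
                trial[k] = v
                err = error_rate(trial)
                if err < best_err:
                    best_v, best_err = v, err
                    improved = True
            w[k] = best_v
        if not improved:                          # no coordinate improves the error: stop
            break
    return w / (w.sum() + 1e-12)                  # normalize weights to sum to one
```

At test time the learned weights would simply be applied to the per-sensor score matrices before taking the argmax over classes.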
A new deep learning-based electroencephalography (EEG) signal analysis framework is proposed. While deep neural networks, and specifically convolutional neural networks (CNNs), have recently gained remarkable attention, they still suffer from the high dimensionality of the training data. The two-dimensional image inputs of CNNs are more prone to redundancy than the one-dimensional time-series inputs of conventional neural networks. In this study, we propose a new dimensionality reduction framework that reduces the dimension of CNN inputs based on a tensor decomposition of the time-frequency representation of EEG signals. The proposed tensor decomposition-based dimensionality reduction algorithm transforms a large set of slices of the input tensor into a concise set of slices, called super-slices. Employing super-slices not only handles the artifacts and redundancies of the EEG data but also reduces the dimension of the CNN training inputs. We also consider different time-frequency representation methods for EEG image generation and provide a comprehensive comparison among them. We test our proposed framework on the CHB-MIT dataset, and the results show that our approach outperforms previous studies.
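The abstract does not name the specific tensor decomposition, so the sketch below assumes a Tucker decomposition (via the tensorly library) applied to a channels × frequency × time tensor, compressing only the channel mode so that a small number of super-slices retain the full spectro-temporal resolution. The function name super_slices and the example dimensions are illustrative assumptions.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker
from tensorly.tenalg import mode_dot

def super_slices(tf_tensor, n_super_slices):
    """Reduce a (channels x freq x time) TF tensor to a few super-slices (sketch).

    tf_tensor : np.ndarray, shape (n_channels, n_freq, n_time)
        Stacked time-frequency images, one slice per EEG channel.
    n_super_slices : int
        Number of compressed slices to feed the CNN.
    """
    X = tl.tensor(tf_tensor)
    # Tucker decomposition: compress the channel mode only, keep freq/time modes full rank.
    core, factors = tucker(
        X, rank=[n_super_slices, tf_tensor.shape[1], tf_tensor.shape[2]]
    )
    # Project the original tensor onto the leading channel-mode components;
    # factors[0] has shape (n_channels, n_super_slices).
    return mode_dot(X, factors[0].T, mode=0)

# Example: 23 EEG channels, 64 frequency bins, 128 time frames -> 4 super-slices
tf = np.random.rand(23, 64, 128)
slices = super_slices(tf, 4)      # shape (4, 64, 128)
```

The resulting super-slices can then be stacked as input channels of a CNN in place of the original per-electrode slices.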
One of the main causes of fatal road accidents is distracted driving. Driving demands the continuous attention of the driver, and certain levels of distraction can cause drivers to lose that attention, which may lead to an accident. Thus, early detection of distraction can reduce the number of accidents. Many studies have been conducted to detect driver distraction automatically. Although camera-based techniques have been successfully employed to characterize driver distraction, they carry a high risk of privacy violation. Physiological signals, on the other hand, have been shown to be a privacy-preserving and reliable indicator of driver state, although the acquisition technology can be intrusive to drivers in practical implementations. In this study, we investigate a continuous measure of phasic galvanic skin response (GSR), collected with a wristband wearable, to identify driver distraction during an on-the-road driving experiment. We first decompose the raw GSR signal into its phasic and tonic components using Continuous Decomposition Analysis (CDA), and the continuous phasic component, which contains the relevant characteristics of the skin conductance signal, is used for further analysis. We generate a high-resolution spectro-temporal transformation of the GSR signals for non-distracted and distracted (calling and texting) scenarios to visualize the behavior of the decomposed phasic GSR signal in correlation with the distracted scenarios. Based on these spectrogram observations, we extract relevant spectral and temporal features to capture the patterns associated with the distracted scenarios at the physiological level. We then perform feature selection using support vector machine recursive feature elimination (SVM-RFE) in order to (1) rank the distinguishing features across the subject population and (2) create a reduced feature subset for more efficient distraction identification on the edge during the generalization phase. We employ a support vector machine (SVM) to generate 10-fold cross-validation (10-CV) identification performance measures. Our experimental results demonstrate a cross-validation accuracy of 94.81% using all features and 93.01% using the reduced feature space. The SVM-RFE-selected feature set thus incurs only a marginal decrease in accuracy while reducing redundancy in the input feature space, supporting the shorter response times needed for early notification of a distracted driver state.
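As a rough illustration of the SVM-RFE plus 10-fold cross-validation pipeline described above, the sketch below uses scikit-learn with a linear SVM. The paper's exact feature set, SVM hyperparameters, and number of retained features are not given in the abstract, so the values and the synthetic placeholder data here are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def rfe_svm_cv(X, y, n_features=10, folds=10):
    """Rank features with SVM-RFE and report k-fold CV accuracy on the reduced set (sketch).

    X : (n_samples, n_features_full) spectro-temporal GSR features
    y : (n_samples,) labels (0 = non-distracted, 1 = distracted)
    """
    # A linear SVM exposes coefficient magnitudes that RFE uses to rank and prune features.
    selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=n_features, step=1)
    clf = make_pipeline(StandardScaler(), selector, SVC(kernel="linear", C=1.0))
    scores = cross_val_score(clf, X, y, cv=folds, scoring="accuracy")
    return scores.mean(), scores.std()

# Example with synthetic data standing in for the extracted GSR features
X = np.random.randn(120, 40)
y = np.random.randint(0, 2, size=120)
mean_acc, std_acc = rfe_svm_cv(X, y)
print(f"10-CV accuracy: {mean_acc:.3f} +/- {std_acc:.3f}")
```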