Driver vigilance estimation is an active research area in traffic safety. Wearable devices can monitor a driver's physiological state in real time, and a data analysis model then turns these signals into a vigilance estimate, so the accuracy of that model directly determines the quality of the estimate. In this paper, we propose a deep coupling recurrent auto-encoder (DCRA) that combines electroencephalography (EEG) and electrooculography (EOG). The model uses a coupling layer to connect two single-modal auto-encoders and optimizes a joint objective loss composed of a single-modal term and a multi-modal term. The single-modal loss is measured by Euclidean distance, while the multi-modal loss is measured by a Mahalanobis distance learned through metric learning; the learned metric matrix defines a new feature space in which the distance between the two modalities can be described more accurately. To keep gradients stable when learning over long sequences, a multi-layer gated recurrent unit (GRU) auto-encoder is adopted. The DCRA thus integrates feature extraction and feature fusion in a single model. Comparative experiments show that the DCRA outperforms both single-modal methods and recent multi-modal fusion methods, achieving a lower root mean square error (RMSE) and a higher Pearson correlation coefficient (PCC).
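The shape of the joint objective can be illustrated with a minimal numpy sketch. The function names (`mahalanobis`, `joint_loss`), the weighting parameter `alpha`, and the choice to apply the coupling term to latent codes are illustrative assumptions, not the authors' implementation; only the combination of Euclidean reconstruction terms with a Mahalanobis coupling term comes from the abstract.

```python
import numpy as np

def mahalanobis(x, y, M):
    """Mahalanobis distance d(x, y) = sqrt((x - y)^T M (x - y)),
    where M is a (learned) positive semi-definite metric matrix."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

def joint_loss(x_eeg, x_eog, rec_eeg, rec_eog, z_eeg, z_eog, M, alpha=0.5):
    """Joint objective: Euclidean reconstruction error for each modality
    plus a Mahalanobis coupling term between the two latent codes."""
    l_eeg = np.linalg.norm(x_eeg - rec_eeg)   # single-modal loss, EEG branch
    l_eog = np.linalg.norm(x_eog - rec_eog)   # single-modal loss, EOG branch
    l_couple = mahalanobis(z_eeg, z_eog, M)   # multi-modal coupling loss
    return l_eeg + l_eog + alpha * l_couple
```

With `M` set to the identity matrix, the Mahalanobis term reduces to plain Euclidean distance; metric learning replaces the identity with a matrix fitted to the data so that cross-modal distances are measured in a better-suited feature space.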
High-dimensional time series classification is a challenging problem, and distance-based similarity measures are one common approach to it. This paper proposes a metric learning-based univariate time series classification method (ML-UTSC), which learns a Mahalanobis matrix via metric learning to compute the local distance between multivariate feature sequences and combines Dynamic Time Warping (DTW) with nearest neighbor classification to obtain the final label. In this method, each univariate time series is divided into segments of equal length, and every segment is represented by its mean, variance, and slope, yielding a multivariate sequence; this equal-interval segmentation lets the Mahalanobis matrix describe the features of the time series data more accurately. A three-dimensional Mahalanobis matrix is then learned from these feature data via metric learning. Compared with the most competitive existing measures, the experimental results show that the proposed algorithm achieves a lower classification error rate on most of the test datasets.
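The feature construction and warped matching described above can be sketched in numpy. The segment count, the least-squares slope estimate, and the function names are illustrative choices; the abstract specifies only the (mean, variance, slope) representation over equal-length segments and DTW with a Mahalanobis local distance.

```python
import numpy as np

def segment_features(series, n_segments):
    """Represent a univariate series as a multivariate sequence of
    (mean, variance, slope) features over equal-length segments."""
    segs = np.array_split(np.asarray(series, dtype=float), n_segments)
    feats = []
    for s in segs:
        t = np.arange(len(s))
        slope = np.polyfit(t, s, 1)[0] if len(s) > 1 else 0.0
        feats.append([s.mean(), s.var(), slope])
    return np.array(feats)

def dtw(a, b, M):
    """Classic DTW over two feature sequences, with a Mahalanobis local
    distance defined by the metric matrix M."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = a[i - 1] - b[j - 1]
            cost = np.sqrt(d @ M @ d)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A 1-nearest-neighbor classifier then assigns each test series the label of the training series with the smallest DTW distance under the learned metric.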
In complex underwater environments, a single sensor with a single modality cannot meet the precision requirements of object identification, so multisource fusion is currently the mainstream research approach. Deep canonical correlation analysis (DCCA) is an efficient feature fusion method, but it suffers from limited scalability and low efficiency. We therefore propose an improved DCCA fusion method for underwater multisource sensor data containing noise. First, a denoising autoencoder is used to remove noise and reduce the data dimension, extracting new feature representations of the raw data. Second, because underwater acoustic data can be characterized as one-dimensional time series, a one-dimensional convolutional neural network is used to improve the DCCA model, with multilayer convolution and pooling decreasing the number of parameters and increasing efficiency. To improve the scalability and robustness of the model, a stochastic decorrelation loss is used to optimize the objective function, reducing the algorithmic complexity from O(n³) to O(n²). Comparison experiments against other typical algorithms on noisy MNIST and on underwater multisource data from different scenes show that the proposed algorithm is superior in both efficiency and target classification precision.

Index terms: Convolutional neural network, deep canonical correlation analysis, denoising autoencoder, multisource fusion, underwater data.
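The quantity that (deep) CCA maximizes, the total canonical correlation between two views, can be computed with a short numpy sketch. This is an illustration of the standard CCA objective, not the authors' model: it makes visible where the cubic cost arises (the inverse square roots of the covariance matrices), which is the step the stochastic decorrelation loss mentioned above replaces with a cheaper estimate.

```python
import numpy as np

def total_correlation(X, Y, eps=1e-8):
    """Sum of canonical correlations between two views X, Y (samples x dims).
    The singular values of T = Sxx^{-1/2} Sxy Syy^{-1/2} are the canonical
    correlations; forming the inverse square roots is the expensive step."""
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / (n - 1) + eps * np.eye(X.shape[1])  # regularized covariances
    Syy = Yc.T @ Yc / (n - 1) + eps * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / (n - 1)

    def inv_sqrt(S):
        # symmetric eigendecomposition -> S^{-1/2}; this is the O(n^3) step
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(T, compute_uv=False).sum()
```

When the two views are identical, every canonical correlation is 1 and the total equals the feature dimension; a fusion model trained on this objective pushes the learned representations of the two sensor streams toward maximal correlation.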