The accurate classification of activity patterns based on radar signatures is still an open problem and is key to detecting anomalous behavior in security and health applications. This paper presents a novel iterative convolutional neural network strategy that uses autocorrelation pre-processing, instead of the traditional micro-Doppler image pre-processing, to classify activities or subjects accurately. The proposed strategy uses an iterative deep learning framework for the automatic definition and extraction of features, followed by a traditional supervised learning classifier to label the different activities. Using three human subjects and their real motion-capture data, twelve thousand radar signatures were simulated by varying additive white Gaussian noise. Additionally, 6720 experimental radar signatures were captured from seven activities performed by one subject, and 4800 signatures from five subjects while walking, using a frequency-modulated continuous-wave radar at 5.8 GHz with 400 MHz of instantaneous bandwidth. Both the simulated and experimental data were used to validate our proposed method. With SNR varying from −20 to 20 dB, the method achieved 88.74% average accuracy at −10 dB and 100% peak accuracy at 15 dB. The proposed Iterative Convolutional Neural Networks followed by Random Forests (ICNNRF) not only outperforms the feature-based methods using micro-Doppler images but also the classification methods that use other types of supervised classifiers after our proposed iterative convolutional neural network.
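A minimal sketch of the data flow described above: an autocorrelation map computed from the raw radar return feeds a small CNN whose activations are then classified by a random forest. The helper names (autocorrelation_map, ConvFeatureExtractor), layer sizes, placeholder data, and the use of an untrained network are illustrative assumptions; the paper's iterative refinement of the CNN features is not reproduced here.

```python
# Sketch only: autocorrelation pre-processing -> CNN features -> random forest.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

def autocorrelation_map(iq: np.ndarray, max_lag: int = 128) -> np.ndarray:
    """Stack lag-wise autocorrelations of a complex radar signature
    (pulses x fast-time samples) into a 2-D map used in place of a
    micro-Doppler image. The layout is an assumption."""
    rows = []
    for lag in range(max_lag):
        prod = iq[:, lag:] * np.conj(iq[:, : iq.shape[1] - lag])
        rows.append(np.abs(prod.mean(axis=1)))
    return np.stack(rows, axis=0)            # shape: (max_lag, n_pulses)

class ConvFeatureExtractor(nn.Module):
    """Small CNN whose final linear layer provides the learned features."""
    def __init__(self, n_feat: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(32 * 4 * 4, n_feat),
        )
    def forward(self, x):
        return self.body(x)

# Placeholder signatures (8 traces of 128 pulses x 256 samples) and labels.
rng = np.random.default_rng(0)
iq = rng.standard_normal((8, 128, 256)) + 1j * rng.standard_normal((8, 128, 256))
labels = rng.integers(0, 7, size=8)

maps = np.stack([autocorrelation_map(sig) for sig in iq])[:, None].astype(np.float32)
cnn = ConvFeatureExtractor()
with torch.no_grad():
    feats = cnn(torch.from_numpy(maps)).numpy()   # CNN-derived features
clf = RandomForestClassifier(n_estimators=200).fit(feats, labels)
print(clf.predict(feats[:3]))
```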
Radar micro-Doppler signatures have been proposed for human monitoring and activity classification for surveillance and outdoor security, as well as for ambient assisted living in healthcare-related applications. A known issue is the reduction in performance when the target moves tangentially to the radar's line of sight. Multiple techniques have been proposed to address this, such as multistatic radar and, to some extent, interferometric (IF) radar. A simulator is presented to generate synthetic data representative of eight radar systems (monostatic, circular multistatic, in-line multistatic [IM], and IF) to quantify classification performance as a function of aspect angle and deployment geometry. This simulator allows an unbiased performance evaluation of the different radar systems. Six human activities are considered, with signatures originating from motion-capture data of 14 different subjects. The classification performance is analysed as a function of aspect angles ranging from 0° to 90°, per activity and overall. The results demonstrate that IF configurations are more robust than IM configurations; however, IM performs better at angles below 55°, before the IF configurations take over.
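To make the aspect-angle dependence concrete, the sketch below computes the Doppler shift each sensor node would observe from the velocity component directed toward it, treating every node as a monostatic sensor for simplicity. The node positions, target kinematics, and 5.8 GHz carrier are assumptions for illustration, not the simulator's actual parameters; they only show why a near-tangential trajectory yields almost no Doppler at one node while another node in a spread geometry still sees a usable shift.

```python
# Sketch only: radial-velocity / Doppler dependence on aspect angle.
import numpy as np

C = 3e8
FC = 5.8e9                 # assumed carrier frequency (Hz)
LAM = C / FC

def doppler_shift(target_pos, target_vel, node_pos):
    """Two-way Doppler shift at a node treated as monostatic."""
    u = (node_pos - target_pos) / np.linalg.norm(node_pos - target_pos)
    v_toward = np.dot(target_vel, u)          # velocity component toward the node
    return 2.0 * v_toward / LAM

# Walker 10 m from the first node, moving almost tangentially to it (80 degrees).
target_pos = np.array([10.0, 0.0])
target_vel = 1.5 * np.array([np.cos(np.deg2rad(80)), np.sin(np.deg2rad(80))])

# One node on the original line of sight plus two nodes spread around the scene.
nodes = [np.array([0.0, 0.0]), np.array([5.0, 8.66]), np.array([5.0, -8.66])]
for i, n in enumerate(nodes):
    print(f"node {i}: Doppler = {doppler_shift(target_pos, target_vel, n):.1f} Hz")
```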
This paper presents a novel approach that applies WiFi-based IQ data and time–frequency images to classify human activities automatically and accurately. The proposed strategy first uses the Choi–Williams distribution transform and the Margenau–Hill spectrogram transform to obtain time–frequency images, followed by offset and principal component analysis (PCA) feature extraction. The offset features were extracted from the IQ data and from several spectra with maximum energy values in the time domain, and the PCA features were extracted from the whole images and from several image slices containing rich unit information. Finally, a traditional supervised learning classifier was used to label the various activities. The proposed method was validated on twelve thousand experimental samples from four categories of WiFi signals. The results showed that our method was robust to varying numbers of image slices or PCA components over the measured dataset. With the random forest (RF) classifier, our method surpassed the alternative classifiers in classification performance, finally obtaining a 91.78% average sensitivity, 91.74% average precision, 91.73% average F1-score, 97.26% average specificity, and 95.89% average accuracy.
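A minimal sketch of the time-frequency-image, PCA, and random-forest stages of such a pipeline. A plain STFT spectrogram (scipy.signal.spectrogram) stands in for the Choi–Williams and Margenau–Hill transforms used in the paper, the offset features are omitted, and the sample shapes and component counts are illustrative assumptions.

```python
# Sketch only: time-frequency image -> PCA features -> random forest.
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def tf_image(iq: np.ndarray, fs: float = 1000.0, nperseg: int = 64) -> np.ndarray:
    """Log-magnitude time-frequency image of a complex IQ trace."""
    _, _, sxx = spectrogram(iq, fs=fs, nperseg=nperseg, noverlap=nperseg // 2,
                            return_onesided=False)
    return np.log1p(np.abs(sxx))

# Placeholder IQ traces and activity labels (assumed shapes and class count).
rng = np.random.default_rng(0)
iq = rng.standard_normal((40, 4096)) + 1j * rng.standard_normal((40, 4096))
labels = rng.integers(0, 4, size=40)

images = np.stack([tf_image(x) for x in iq])       # (n_samples, n_freq, n_time)
flat = images.reshape(len(images), -1)

pca = PCA(n_components=20).fit(flat)               # PCA features from whole images
feats = pca.transform(flat)

clf = RandomForestClassifier(n_estimators=300).fit(feats, labels)
print(clf.score(feats, labels))                    # training accuracy, sanity check only
```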