The Internet of Things (IoT) has become a popular paradigm for fulfilling industry needs such as asset tracking, resource monitoring and automation. Because security mechanisms are often neglected when IoT devices are deployed, these devices are more easily compromised by sophisticated, high-volume intrusion attacks that use advanced techniques. Over the past decade, the cyber security community has applied Artificial Intelligence (AI) to identify such attacks automatically. However, deep learning methods have yet to be extensively explored for Intrusion Detection Systems (IDS) targeting IoT. Most recent work is based on time-sequential models such as LSTMs, and research on CNNs is scarce because they are not naturally suited to this problem. In this article, we propose a novel CNN-based solution to intrusion attacks against IoT devices. The sensor data is encoded so that convolutional operations can capture temporal patterns that are useful for attack detection. The proposed method is integrated with two classical CNNs, ResNet and EfficientNet, and its detection performance is evaluated. The experimental results show significant improvements in both the true positive rate and the false positive rate compared to an LSTM baseline.
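The abstract does not specify the exact encoding, but the idea of turning sensor time series into inputs a CNN can convolve over can be sketched as follows. This is a minimal illustration, assuming a sliding-window encoding where each window is a (time steps × sensor channels) 2D array; the function name and parameters are hypothetical, not the authors' implementation.

```python
import numpy as np

def encode_windows(readings, window=32, stride=16):
    """Slice a (time, sensors) array into fixed-size 2D windows.

    Each window is a (window, sensors) "image" whose rows are time
    steps and whose columns are sensor channels, so 2D convolutions
    can scan for temporal patterns across sensors.
    """
    t, _ = readings.shape
    starts = range(0, t - window + 1, stride)
    return np.stack([readings[i:i + window] for i in starts])

# 200 time steps from 8 hypothetical sensor channels
data = np.random.rand(200, 8)
windows = encode_windows(data)
print(windows.shape)  # (11, 32, 8)
```

Each resulting window can then be fed to an image-style CNN such as ResNet or EfficientNet (with the input layer adapted to the window shape).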
Convolutional neural networks (CNNs) have strong feature extraction capability and have been widely used to extract features from hyperspectral images. The local binary pattern (LBP) is a simple but powerful descriptor of spatial features that can lessen the workload of CNNs and improve classification accuracy. To make full use of the feature extraction capability of CNNs and the discriminative power of LBP features, a novel classification method combining dual-channel CNNs and LBP is proposed. Specifically, a one-dimensional CNN (1D-CNN) processes the original hyperspectral data to extract hierarchical spectral features, while a second, identical 1D-CNN processes LBP features to further extract spatial features. The fully connected layers of the two CNNs are then concatenated to fuse the features, and the result is fed into a softmax classifier to complete the classification. Experimental results demonstrate that the proposed method achieves 98.52%, 99.54% and 99.54% classification accuracy on the Indian Pines, University of Pavia and Salinas datasets, respectively, and maintains good performance even with limited training samples.
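For readers unfamiliar with the LBP descriptor mentioned above, a minimal sketch of the basic 8-neighbour variant is shown below. This assumes the standard 3×3 formulation (each pixel replaced by an 8-bit code comparing it to its neighbours); the abstract does not state which LBP variant or neighbourhood the authors use.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern for a 2D grayscale array.

    Each interior pixel becomes an 8-bit code: one bit per neighbour,
    set when that neighbour is >= the centre pixel. Border pixels are
    dropped for simplicity.
    """
    # Offsets of the 8 neighbours, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    centre = img[1:h - 1, 1:w - 1]
    code = np.zeros_like(centre, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neigh >= centre).astype(np.uint8) << bit
    return code

patch = np.array([[1., 2., 3.],
                  [4., 5., 6.],
                  [7., 8., 9.]])
print(lbp_image(patch))  # [[120]]: bits 3-6 set (neighbours 6, 9, 8, 7 >= 5)
```

In the dual-channel setup, codes like these (typically summarised as histograms per band or per patch) form the spatial-feature channel fed to the second 1D-CNN.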
Although deep convolutional neural networks (CNNs) have made great breakthroughs in vision-based gesture recognition, it remains challenging to deploy these high-performance networks on resource-constrained mobile platforms and to acquire the large numbers of labeled samples required for deep training of CNNs. Furthermore, some application scenarios provide only a few samples, or even a single one, for a new gesture class, so CNN-based recognition methods cannot achieve satisfactory classification performance. In this paper, a well-designed lightweight network based on I3D, with spatial-temporal separable 3D convolutions and Fire modules, is proposed as an effective tool for extracting discriminative features. Capacity gained by deep training on large numbers of samples from related categories is then transferred to enhance the learning ability of the proposed network instead of training from scratch. In this way, one-shot learning hand gesture recognition (OSLHGR) is carried out by a rational decision based on a distance measure. Moreover, a mechanism of discrimination evolution, which incorporates new samples and integrates votes from multiple classifiers, is established to improve the learning and classification performance of the proposed method. Finally, a series of experiments on the IsoGD and Jester datasets demonstrates the effectiveness of our improved lightweight I3D. Meanwhile, a dedicated dataset of gestures with varying angles and directions, BSG 2.0, and the ChaLearn gesture dataset (CGD) are used to test OSLHGR. Results on different experimental platforms validate the advantages of the proposed method in classification accuracy and real-time response speed.
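The parameter savings behind spatial-temporal separable 3D convolutions can be illustrated with simple weight-count arithmetic: a full k×k×k kernel is replaced by a spatial 1×k×k convolution followed by a temporal k×1×1 convolution. The channel sizes below are illustrative, not taken from the paper.

```python
def conv3d_params(c_in, c_out, kt, kh, kw):
    """Weight count of a 3D convolution layer (biases ignored)."""
    return c_in * c_out * kt * kh * kw

c_in, c_out, k = 64, 64, 3

full = conv3d_params(c_in, c_out, k, k, k)            # one k x k x k kernel
separable = (conv3d_params(c_in, c_out, 1, k, k)      # spatial 1 x k x k
             + conv3d_params(c_out, c_out, k, 1, 1))  # temporal k x 1 x 1
print(full, separable)  # 110592 49152
```

For k = 3 the factorized pair uses fewer than half the weights of the full kernel, which is one reason this decomposition (together with Fire modules) suits resource-constrained mobile deployment.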