A multichannel autoencoder deep learning approach is developed to improve the detection accuracy and reduce the false alarm rate of present intrusion detection systems. First, two separate autoencoders are trained, one on normal traffic and one on attack traffic. The original sample and the two reconstructed feature vectors together form a multichannel feature vector. Next, a one-dimensional convolutional neural network (CNN) learns probable relationships across channels to better discriminate between normal and attack traffic. Unsupervised multichannel feature learning and supervised cross-channel feature dependency learning are combined to build an effective intrusion detection model. The method described in this study can significantly reduce false positives while also improving the detection accuracy of unknown attacks, which is the focus of this paper. The autoencoder can effectively reduce the number of features while integrating easily with different neural networks; it can shorten model training time while improving detection accuracy. An evolutionary algorithm is used to search for the optimal topology of the CNN model, tuning its hyperparameters and improving the network's capacity to recognize interchannel dependencies. Building on the multichannel autoencoder's effectiveness, the fourth experiment is a comparative analysis that demonstrates the benefits of the approach by comparing it against the results of several other intrusion detection methods. The technique outperforms previous intrusion detection algorithms on several datasets and achieves superior predictive accuracy.
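The multichannel feature construction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the two "autoencoders" are stand-in callables, and the function names and shapes are assumptions for demonstration only.

```python
import numpy as np

def build_multichannel(x, ae_normal, ae_attack):
    """Stack the raw sample with its two autoencoder reconstructions
    as channels of shape (3, n_features), ready for a 1-D CNN."""
    return np.stack([x, ae_normal(x), ae_attack(x)])

# Toy stand-ins for the trained normal-traffic and attack-traffic
# autoencoders (a real model would reconstruct, not just rescale).
ae_normal = lambda x: x * 0.9
ae_attack = lambda x: x * 1.1

x = np.ones(8)                  # one traffic sample with 8 features
mc = build_multichannel(x, ae_normal, ae_attack)
print(mc.shape)                 # (3, 8): 3 channels over 8 features
```

A 1-D convolution applied along the feature axis of this (3, n_features) tensor can then learn dependencies across the three channels, as the abstract describes.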
Weighted MR images of 421 patients with nasopharyngeal cancer were obtained at the head and neck level, and the tumors in the images were annotated by two expert physicians. The multimodal images and labels of 346 patients served as the training set, while those of the remaining 75 patients served as an independent test set. A convolutional neural network (CNN) was used for single-modal multidimensional information fusion and for multimodal multidimensional information fusion (MMMDF). The performance of the three models is compared, and the findings reveal that the multimodal multidimensional fusion model performs best, the two-modal multidimensional information fusion model second, and the single-modal multidimensional information fusion model worst. In MR images of nasopharyngeal cancer, a convolutional network can segment tumors precisely and efficiently.
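The multimodal fusion idea can be illustrated with a short sketch. This is an assumption-laden toy example, not the MMMDF model itself: the modality arrays and the channel-stacking fusion step are hypothetical stand-ins for co-registered weighted MR sequences.

```python
import numpy as np

def fuse_modalities(*volumes):
    """Concatenate co-registered MR modalities along a new channel
    axis so a CNN can learn cross-modal features jointly."""
    return np.stack(volumes, axis=0)

# Toy stand-ins for two co-registered 64x64 MR slices of the
# same anatomy (e.g., two different weighted sequences).
modality_a = np.zeros((64, 64))
modality_b = np.ones((64, 64))

fused = fuse_modalities(modality_a, modality_b)
print(fused.shape)  # (2, 64, 64): channels x height x width
```

Feeding such a channel-stacked tensor to a segmentation CNN lets the network exploit complementary contrast across modalities, which is consistent with the abstract's finding that more modalities yield better segmentation.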