Anomaly detection in network traffic is a key technique for securing future networks. The topic is more important than ever: network traffic keeps growing, and smart algorithms are needed that can automatically adapt to new network conditions, detect threats, and recognize the type of a possible attack. Many approaches exist today, and some have reached reasonably high accuracy. However, most works are evaluated on old datasets that do not reflect current network conditions, which leads to overfitted results: the data are highly redundant and fail to show how the latest methods perform in real-world anomaly detection applications. In this work, we applied two new methods based on convolutional neural networks, a U-Net-based model and a Temporal Convolutional Network (TCN)-based model, to network attack classification. We trained and evaluated both methods on the old KDD99 dataset and on the modern large-scale CSE-CIC-IDS2018 dataset. The TCN with LSTM achieved 92% and 97% accuracy on KDD99 and CSE-CIC-IDS2018, respectively, while the U-Net model achieved 93% and 94% accuracy on the same datasets. Additionally, we used the focal loss function in the TCN with Long Short-Term Memory (LSTM) model, which mitigates class imbalance in time-series data. We showed that the TCN combined with an LSTM network, as well as the U-Net model, can achieve higher accuracy than other network architectures for network traffic classification. We also showed that methods trained on the old dataset can easily overfit during training and achieve relatively good results on the test set while performing much worse on more complex, up-to-date data.
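The abstract does not give its exact formulation, but the focal loss it refers to is commonly the binary focal loss of Lin et al.; a minimal sketch, assuming the usual defaults gamma = 2 and alpha = 0.25 (both are assumptions, not values from the paper):

```python
import math

def focal_loss(probs, targets, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: down-weights well-classified (easy) examples so
    training focuses on hard, often minority-class, samples."""
    total = 0.0
    for p, t in zip(probs, targets):
        p = min(max(p, eps), 1 - eps)          # avoid log(0)
        pt = p if t == 1 else 1 - p            # probability of the true class
        alpha_t = alpha if t == 1 else 1 - alpha
        total += -alpha_t * (1 - pt) ** gamma * math.log(pt)
    return total / len(probs)
```

The (1 - pt)^gamma factor is what addresses class imbalance: a confidently correct prediction (pt near 1) contributes almost nothing, so rare attack classes dominate the gradient.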
Pulmonary fibrosis is one of the most severe long-term consequences of COVID-19. Corticosteroid treatment increases the chances of recovery; unfortunately, it can also have side effects. We therefore aimed to develop prediction models for a personalized selection of patients who would benefit from corticotherapy. The experiment used a range of algorithms, including Logistic Regression, k-NN, Decision Tree, XGBoost, Random Forest, SVM, MLP, AdaBoost, and LGBM. In addition, an easily human-interpretable model is presented. All algorithms were trained on a dataset of 281 patients. Every patient underwent an examination at the start of post-COVID treatment and three months later. The examination comprised a physical examination, blood tests, functional lung tests, and an assessment of health state based on X-ray and HRCT. The Decision Tree algorithm achieved a balanced accuracy (BA) of 73.52%, a ROC-AUC of 74.69%, and an F1 score of 71.70%. Other algorithms with high accuracy included Random Forest (BA 70.00%, ROC-AUC 70.62%, F1 score 67.92%) and AdaBoost (BA 70.37%, ROC-AUC 63.58%, F1 score 70.18%). The experiments show that information obtained at the initiation of post-COVID-19 treatment can be used to predict whether a patient will benefit from corticotherapy. The presented predictive models can help clinicians make personalized treatment decisions.
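For reference, the classification metrics reported above have standard definitions; a minimal sketch (illustration only, not the paper's evaluation code) for binary labels:

```python
def confusion_counts(y_true, y_pred):
    """Confusion-matrix counts for binary labels (1 = benefits, 0 = does not)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity and specificity; robust to class imbalance."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    sensitivity = tp / (tp + fn)   # recall on the positive class
    specificity = tn / (tn + fp)   # recall on the negative class
    return (sensitivity + specificity) / 2

def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Balanced accuracy is the natural headline metric here because the two patient groups (benefiting vs. not benefiting from corticotherapy) are unlikely to be equally sized.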
Forensically trained facial reviewers are still considered one of the most accurate approaches to person identification from video records. The human brain can use information not just from a single image but from a sequence of images (i.e., video), and even with low-quality footage or a long distance from the camera it can accurately identify a given person. Unfortunately, in many cases a single still image is needed; an example is a police search that is about to be announced in newspapers. This paper introduces a face database collected in a real environment, comprising 17,426 sequences of images. The dataset includes persons of various races and ages, captured in different environments, under different lighting conditions, and with different camera device types. The paper also introduces a new multi-frame face super-resolution method and compares it with state-of-the-art single-frame and multi-frame super-resolution methods. We show that the proposed method increases the quality of face images even for low-resolution, low-quality inputs, and that it outperforms single-frame approaches, which are still considered the best in this area. The quality of the face images was evaluated with several objective mathematical metrics as well as subjectively by several volunteers. The source code and the dataset have been released, and the experiment is fully reproducible.
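The abstract does not name its objective quality metrics; a commonly used one for super-resolution evaluation is PSNR, sketched here as an illustration (the choice of PSNR is an assumption, not confirmed by the paper):

```python
import math

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a reference image and a
    restored (e.g. super-resolved) image, both given as flat pixel lists.
    Higher is better; identical images yield infinity."""
    mse = sum((r - s) ** 2 for r, s in zip(reference, restored)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)
```

PSNR only measures pixel-wise fidelity, which is why subjective evaluation by volunteers, as done in the paper, remains a useful complement.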
Progress in computational offloading is strongly driving the development of the modern Information and Communications Technology domain. The growth of resource-constrained Internet of Things devices demands new computational offloading strategies that can be sustainably integrated into beyond-5G networks. One solution to this demand is Mobile Edge Computing (MEC) powered by advanced Machine Learning (ML) methods. This paper proposes an ML-powered computational offloading strategy for a wireless cellular network, applying the classical Travelling Salesman Problem (TSP) to computational offloading location selection. The main specificity of the proposed approach is the use of imagery data. The paper first reviews the literature to identify existing strategies, then proposes a novel method that uses location-like imagery data to identify the most suitable computation location by searching for a route between locations with the proposed Deep Learning (DL) model. On the testing dataset the model achieved an MAE of 1,575, an MSE of 10,119,205, and an R² of 0.98, which outperforms or is comparable with other well-known architectures. Moreover, training is shown to be 2-10 times faster. Interestingly, the MAE values are relatively low compared to the target values to be predicted (despite the rather high MSE), which is confirmed by the almost perfect R² value. We conclude that the proposed neural network can predict the target values and that the solution can be applied to real-world tasks.
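The relationship between the three reported regression metrics explains the abstract's observation (low MAE, high MSE, near-perfect R²): MSE squares errors, so a few large route-length targets inflate it, while R² normalizes by target variance. A minimal sketch of the standard definitions (illustration only, not the paper's evaluation code):

```python
def regression_metrics(y_true, y_pred):
    """Return (MAE, MSE, R^2) for two equal-length sequences of numbers."""
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mean_true = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)          # total sum of squares
    r2 = 1 - ss_res / ss_tot
    return mae, mse, r2
```

When targets span a wide range (as route lengths between offloading locations plausibly do), an MSE many orders of magnitude above the MAE is expected and consistent with R² close to 1.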