Distributed network attacks are often referred to as Distributed Denial of Service (DDoS) attacks. These attacks exploit specific capacity limits that apply to any network resource, such as the infrastructure hosting an organization's website. Existing research has largely relied on the dated KDD dataset, so more recent data are needed to characterize the current state of DDoS attacks. This paper applies a machine learning approach to the classification and prediction of DDoS attack types, using the Random Forest and XGBoost classification algorithms. To that end, a complete framework for DDoS attack prediction is proposed. The UNSW-NB15 dataset (obtained from GitHub) was used, with Python as the simulation environment. After training the machine learning models, confusion matrices were generated to assess model performance. For the first classifier, Random Forest, both Precision (PR) and Recall (RE) were 89%, and the average Accuracy (AC) of the model was 89%, which is very good. For the second classifier, XGBoost, both Precision (PR) and Recall (RE) were 90%, and the average Accuracy (AC) of the model was 90%. Compared with existing research, which reported accuracies of 85% and 79%, the proposed approach improves detection accuracy.
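The train-then-evaluate step described above can be sketched as follows. This is a minimal illustration using synthetic data in place of the UNSW-NB15 records and scikit-learn's Random Forest; the sample counts, feature dimensions, and hyperparameters are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: Random Forest classification with confusion-matrix evaluation,
# using synthetic stand-ins for preprocessed network-flow features and labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the UNSW-NB15 feature matrix and attack labels
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Confusion matrix plus the PR / RE / AC metrics reported in the paper
cm = confusion_matrix(y_test, y_pred)
print("Confusion matrix:\n", cm)
print("Precision:", precision_score(y_test, y_pred))
print("Recall:   ", recall_score(y_test, y_pred))
print("Accuracy: ", accuracy_score(y_test, y_pred))
```

The same evaluation pattern applies unchanged if the Random Forest is swapped for an XGBoost classifier.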
Human speech perception is bimodal in nature, and adding visual information from the speaker's mouth region has been shown to improve the performance of automatic speech recognition (ASR) systems. The performance of audio-only speech recognition degrades rapidly in the presence of even moderate noise, but it can be made more robust by incorporating visual information from the speaker's mouth region. The approach taken in this paper is therefore to incorporate dynamic information captured from the speaker's mouth across successive frames of video recorded during spoken utterances. Audio-only, visual-only, and audio-visual recognizers were evaluated in the presence of noise, and the results demonstrate that the audio-visual recognizer performs the most robustly.
Facial expression recognition has been a hot topic for decades, but high intraclass variation makes it challenging. To overcome intraclass variation in visual recognition, we introduce a novel fusion methodology in which the proposed model first extracts features and then performs feature fusion. Specifically, ResNet-50, VGG-19, and Inception-V3 are used for feature learning, followed by feature fusion. Finally, the outputs of the three feature extraction models are combined using ensemble learning techniques for final expression classification. The representation learned by the proposed methodology is robust to occlusions and pose variations and offers promising accuracy. To evaluate the efficiency of the proposed model, we use two in-the-wild benchmark datasets for facial expression recognition: the Real-world Affective Faces Database (RAF-DB) and AffectNet. The proposed model classifies emotions into seven categories: happiness, anger, fear, disgust, sadness, surprise, and neutral. Furthermore, the performance of the proposed model is compared with other algorithms in terms of computational cost, convergence, and accuracy on a standard classification problem.
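The fusion-then-ensemble idea above can be illustrated with a small NumPy sketch. The feature dimensions match the usual penultimate-layer sizes of ResNet-50, VGG-19, and Inception-V3, but the feature values and per-model probability outputs below are random or hand-picked stand-ins, not outputs of the actual networks or the paper's model.

```python
import numpy as np

# Illustrative per-backbone feature vectors for one face image
# (2048 / 4096 / 2048 match typical penultimate-layer sizes; values are stand-ins)
rng = np.random.default_rng(0)
resnet_feat = rng.random(2048)
vgg_feat = rng.random(4096)
inception_feat = rng.random(2048)

# Feature-level fusion: concatenate the three descriptors into one vector
fused = np.concatenate([resnet_feat, vgg_feat, inception_feat])

# Decision-level ensemble (soft voting): average the per-model class
# probabilities over the seven expression categories and take the arg-max
EMOTIONS = ["happiness", "anger", "fear", "disgust",
            "sadness", "surprise", "neutral"]
probs = np.array([
    [0.60, 0.05, 0.05, 0.05, 0.10, 0.10, 0.05],  # hypothetical ResNet-50 output
    [0.50, 0.10, 0.05, 0.05, 0.10, 0.10, 0.10],  # hypothetical VGG-19 output
    [0.55, 0.05, 0.10, 0.05, 0.05, 0.10, 0.10],  # hypothetical Inception-V3 output
])
avg = probs.mean(axis=0)
print("Fused descriptor length:", fused.shape[0])   # 2048 + 4096 + 2048 = 8192
print("Ensemble prediction:", EMOTIONS[int(avg.argmax())])
```

Soft voting is only one of several ensemble strategies the abstract's "ensemble learning techniques" could refer to; majority (hard) voting or a meta-classifier over the fused descriptor would follow the same structure.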
Diabetic retinopathy (DR) is an eye disease that can develop in individuals with diabetes and may result in vision loss. DR identification and routine diagnosis is a challenging task and may require several screenings; early identification of DR has the potential to prevent or delay vision loss. For real-time applications, an automated DR identification approach is needed to assist clinicians and reduce possible human error. In this research work, we propose a deep neural network and genetic algorithm-based feature selection approach. Five advanced convolutional neural network architectures, i.e., AlexNet, NASNet-Large, VGG-19, Inception V3, and ShuffleNet, are used to extract features from the fundus images, followed by a genetic algorithm that ranks the features as high rank (optimal) or low rank (unsatisfactory). The nonoptimal feature attributes are then dropped from the training and validation feature vectors. A support vector machine (SVM)-based classification model is used to build the diabetic retinopathy recognition model. Model performance is evaluated using accuracy, precision, recall, and F1 score. The proposed model is tested on three different datasets: the Kaggle dataset, a self-generated custom dataset, and an enhanced custom dataset, achieving 97.9%, 94.76%, and 96.4% accuracy, respectively. For the enhanced custom dataset, data augmentation was performed to compensate for the smaller dataset size and to reduce noise in the fundus images.
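The feature-selection stage of this pipeline can be sketched as a simple genetic algorithm whose fitness function is cross-validated SVM accuracy on the selected feature subset. Synthetic data stands in for the CNN-extracted fundus features, and the population size, generation count, and mutation rate are illustrative choices, not the paper's settings.

```python
# Hedged sketch: genetic-algorithm feature selection scored by a linear SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in for CNN-extracted feature vectors and DR labels
X, y = make_classification(n_samples=200, n_features=20,
                           n_informative=5, random_state=0)

def fitness(mask):
    """Cross-validated SVM accuracy on the features selected by a binary mask."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="linear"),
                           X[:, mask.astype(bool)], y, cv=3).mean()

# Initial population of random binary feature masks
pop = rng.integers(0, 2, size=(8, X.shape[1]))
for generation in range(5):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:4]]        # keep the 4 fittest masks
    children = []
    for _ in range(4):
        a, b = parents[rng.integers(4)], parents[rng.integers(4)]
        cut = rng.integers(1, X.shape[1])              # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05           # small mutation rate
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("Selected (high-rank) features:", np.flatnonzero(best))
print("CV accuracy on selected subset: %.3f" % fitness(best))
```

In the full pipeline the surviving (high-rank) columns would then be kept in the training and validation feature vectors, and the final SVM retrained on that reduced representation.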