Seed scheduling, which determines which seed is fed to the fuzzer first and how many mutated test cases are generated from each seed, significantly influences crash detection performance in fuzz testing. Even for the same fuzzer, the performance in detecting crashes that cause program failure varies considerably depending on the seed-scheduling method used. Most existing coverage-guided fuzzers rely on heuristic seed scheduling. These heuristics cannot reliably identify the seeds with the highest potential to trigger crashes, so the fuzzer detects crashes inefficiently. Moreover, a fuzzer's crash detection performance is affected by the characteristics of the target program. To address this problem, we propose a general-purpose reinforcement-learning-based seed-scheduling method that not only improves the crash detection performance of fuzz testing but also remains robust to the characteristics of the target program. In experiments conducted on a variety of programs, a fuzzer using the proposed method detected the most crashes in all but one of the target programs in which crashes were found, and showed better overall crash detection efficiency than the baseline fuzzers.
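The core idea of learning-based seed scheduling can be sketched as a multi-armed bandit: each seed is an arm, and the reward is how much new behavior (e.g., new coverage) mutating that seed produced. This is a minimal illustration under assumed names (`BanditSeedScheduler`, epsilon-greedy selection), not the paper's actual algorithm:

```python
import random

class BanditSeedScheduler:
    """Epsilon-greedy seed scheduler: treat each seed as a bandit arm,
    with reward = new coverage (or crashes) found by mutating it."""

    def __init__(self, seeds, epsilon=0.1):
        self.seeds = list(seeds)
        self.epsilon = epsilon
        self.value = {s: 0.0 for s in self.seeds}  # running mean reward
        self.count = {s: 0 for s in self.seeds}

    def select(self):
        # Explore a random seed with probability epsilon,
        # otherwise exploit the seed with the highest estimated value.
        if random.random() < self.epsilon:
            return random.choice(self.seeds)
        return max(self.seeds, key=lambda s: self.value[s])

    def update(self, seed, reward):
        # Incremental mean update of the seed's estimated value.
        self.count[seed] += 1
        self.value[seed] += (reward - self.value[seed]) / self.count[seed]
```

In a fuzzing loop, `select()` picks the seed to mutate next and `update()` feeds back the observed reward, so seeds that keep uncovering new program states are scheduled more often.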
Artificial intelligence (AI) is increasingly used in cybersecurity, particularly for detecting malicious applications. However, the black-box nature of AI models presents a significant challenge: the lack of transparency makes their results difficult to understand and trust. Addressing this requires incorporating explainability into the detection model, yet little research explains why an application is detected as malicious or describes its behavior. In this paper, we propose a Vision Transformer (ViT)-based malware detection model combined with malicious-behavior extraction via an attention map, achieving both high detection accuracy and high interpretability. Detection uses a ViT-based model that takes as input an image converted from the application. ViT offers a significant advantage for image detection tasks by leveraging attention mechanisms, enabling robust interpretation of the intricate patterns within the images. An attention map is generated from the attention values produced during detection and is used to identify the factors the model deems important; class and method names are then extracted and reported based on those factors. Detection performance was validated on real-world datasets. The malware detection accuracy was 80.27%, which is high compared with other models used for image-based malware detection. Interpretability was measured analogously to the F1-score, yielding a score of 0.70, which is superior to existing interpretable machine learning (ML)-based methods such as Drebin, LIME, and XMal. By analyzing malicious applications, we also confirmed that the extracted classes and methods are related to malicious behavior. With the proposed method, security experts can understand both the reason behind the model's detection and the behavior of malicious applications.
Given the growing importance of explainable artificial intelligence in cybersecurity, this method is expected to make a significant contribution to this field.
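To illustrate how an attention map can surface the factors a ViT considers important, the following sketch averages the [CLS]-token attention over heads and returns the most-attended image patches. The function name, tensor shape, and top-k selection are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def top_attended_patches(attn, k=3):
    """attn: (heads, n_tokens, n_tokens) attention matrix whose token 0
    is the [CLS] token. Returns the indices of the k image patches the
    CLS token attends to most, averaged over all heads."""
    cls_attn = attn[:, 0, 1:].mean(axis=0)  # CLS -> patch weights, head-averaged
    order = np.argsort(cls_attn)[::-1]      # patches sorted by descending weight
    return order[:k].tolist()
```

The returned patch indices can then be mapped back to the byte regions of the application image, and from there to the class and method names those regions encode.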
As the complexity and scale of network environments continue to grow, methods that detect attacks and intrusions by classifying network traffic as normal or abnormal are reaching their limits. The number of network traffic signatures is increasing exponentially, to the point that semi-real-time detection is no longer possible. Machine-learning-based intrusion detection, meanwhile, provides only simple guidance in the form of brief security-event summaries. Because of data noise, diversification, and continuous changes to systems and network environments, security data tailored to a specific environment cannot easily be assembled; and although machine learning is trained and evaluated on generalized datasets, comparable performance can be expected only in the specific network environment the data reflects. In this study, we propose a high-speed outlier detection method that customizes a network dataset in real time for a continuously changing network environment. The proposed method filters noisy data with an ensemble model based on the voting results of six classifiers (decision tree, random forest, support vector machine, naive Bayes, k-nearest neighbors, and logistic regression), reflecting the distribution and environmental characteristics of the dataset. To evaluate the method, we measured attack detection accuracy while gradually removing noisy data from a time-series dataset. The experiments show that the proposed method maintains a training dataset small enough for semi-real-time learning, about 10% of the full training dataset, while achieving the same level of accuracy as a detection model trained on the full dataset. These results can serve as a basis for automatic tuning of network datasets and for machine learning applicable to special-purpose environments and devices, such as ICS environments.
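The ensemble voting filter described above can be sketched as follows. This is a minimal illustration with an assumed `filter_noise` helper and an assumed 4-of-6 vote threshold; the paper's actual voting rule and classifier training are not specified here:

```python
from collections import Counter

def filter_noise(samples, labels, classifiers, min_votes=4):
    """Keep a (sample, label) pair only when at least `min_votes` of the
    classifiers agree with its recorded label; otherwise treat the pair
    as noise and drop it from the training set."""
    kept = []
    for x, y in zip(samples, labels):
        votes = Counter(clf(x) for clf in classifiers)
        if votes[y] >= min_votes:
            kept.append((x, y))
    return kept
```

In the proposed setting, the six classifiers would be the trained decision tree, random forest, SVM, naive Bayes, k-NN, and logistic regression models, and the filtered output becomes the compact, semi-real-time training set.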
Deep neural networks (DNNs) and convolutional neural networks (CNNs) have improved accuracy in many artificial intelligence (AI) applications, including recognition and detection tasks such as speech recognition, facial recognition, and object detection. However, CNN computation requires complex arithmetic and extensive memory access; thus, designing new hardware that increases efficiency and throughput without increasing hardware cost is critical. This area of hardware design is very active and will remain so in the near future. In this paper, we propose a novel 8T XNOR-SRAM design for binary/ternary DNNs (TBNs) that directly supports both XNOR-Networks and TBNs. The proposed SRAM computing-in-memory (CIM) cell operates in two modes: conventional 6T SRAM and XNOR mode. By adding two transistors to the conventional 6T structure, our proposed CIM achieves improvements of up to 98% in power consumption and 90% in delay compared with the existing state-of-the-art XNOR-CIM.
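The computation such an XNOR cell accelerates is the reduction of a {-1, +1} dot product to a bitwise XNOR followed by a popcount. The software sketch below shows the arithmetic identity only (assumed bit-packing convention: bit 1 encodes +1, bit 0 encodes -1); it is not a model of the hardware design:

```python
def xnor_dot(a_bits, w_bits, n):
    """Binary dot product of two n-element {-1,+1} vectors, each packed
    into an integer (bit 1 = +1, bit 0 = -1). XNOR counts the positions
    where the vectors agree; the dot product is 2*popcount - n."""
    mask = (1 << n) - 1
    agree = bin(~(a_bits ^ w_bits) & mask).count("1")
    return 2 * agree - n
```

Performing this XNOR-and-accumulate inside the SRAM array is what eliminates most of the data movement that dominates conventional CNN accelerators.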