Many IoT (Internet of Things) systems run Android or Android-like operating systems. With the continuous development of machine learning algorithms, learning-based Android malware detection systems for IoT devices have become increasingly common. However, these learning-based detection models are often vulnerable to adversarial samples, so an automated testing framework is needed to help such systems perform security analysis. Most current methods for generating adversarial samples require access to the model's training parameters, and most target image data. To address this problem, we propose a testing framework for learning-based Android malware detection systems (TLAMD) for IoT devices. The key challenge is constructing a suitable fitness function that generates effective adversarial samples without affecting the features of the application. By introducing genetic algorithms and several technical improvements, our testing framework can generate adversarial samples for IoT Android applications with a success rate of nearly 100% and can perform black-box testing on the system.
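The abstract does not give the framework's implementation details, but the core idea of a black-box genetic search over an app's feature vector can be sketched as follows. This is a minimal illustration, not TLAMD itself: the fitness function, population sizes, and the 0.5 decision threshold are assumptions, and `predict_fn` stands in for any black-box malware detector that returns a malware probability.

```python
import random

def generate_adversarial(predict_fn, features, n_generations=50,
                         pop_size=20, mutation_rate=0.05):
    """Genetic search for a variant the detector labels benign.

    predict_fn: black-box scoring function returning a malware probability.
    features:   original binary feature vector of the app (list of 0/1).
    Only feature *additions* are allowed, so the app's original
    functionality-relevant features are never removed.
    """
    def mutate(vec):
        child = vec[:]
        for i, orig in enumerate(features):
            if orig == 0 and random.random() < mutation_rate:
                child[i] = 1  # only add features, never remove original ones
        return child

    def crossover(a, b):
        cut = random.randrange(len(a))
        return a[:cut] + b[cut:]

    population = [mutate(features[:]) for _ in range(pop_size)]
    for _ in range(n_generations):
        # fitness: lower malware score first, then fewer added features
        scored = sorted(population, key=lambda v: (predict_fn(v), sum(v)))
        if predict_fn(scored[0]) < 0.5:
            return scored[0]  # detector now classifies the sample as benign
        parents = scored[: pop_size // 2]
        population = parents + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
    return None  # no adversarial sample found within the budget
```

Because the search only queries `predict_fn`, it needs no access to model weights or gradients, which is what makes the testing black-box.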
Insider threats have long been one of the most severe challenges in cybersecurity. They can lead to the destruction of an organisation's internal network and to information leakage, seriously threatening the confidentiality, integrity, and availability of data. To make matters worse, because the attacker has authorized access to the internal network, they can launch attacks from the inside and erase their traces, which makes tracking and forensics challenging. To mitigate this issue, this paper proposes a blockchain traceability system for insider threats. First, we construct an insider threat model of the internal network from two perspectives: insider attack forensics and preventing insider attackers from escaping. Then, we analyze why it is difficult to track attackers and obtain evidence once an insider threat has occurred. After that, we design the blockchain traceability system in terms of data structure, transaction structure, block structure, consensus algorithm, data storage algorithm, and query algorithm, while using differential privacy to protect user privacy. We deployed the system and conducted experiments; the results show that it achieves the goal of mitigating insider threats.
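The abstract mentions block structure and data storage but gives no specifics, so the following is only a minimal sketch of the property a blockchain traceability log provides: once insider actions are hash-chained into blocks, an attacker cannot silently rewrite their trace. The record format and SHA-256 chaining here are illustrative assumptions, not the paper's design.

```python
import hashlib
import json
import time

def make_block(records, prev_hash):
    """Append a block of audit records to the chain.

    Each block commits to the previous block's hash, so editing any
    earlier record invalidates every later link.
    """
    header = {
        "prev_hash": prev_hash,
        "timestamp": time.time(),
        "records": records,  # e.g. access-log entries for insider actions
    }
    block_hash = hashlib.sha256(
        json.dumps(header, sort_keys=True).encode()).hexdigest()
    return {**header, "hash": block_hash}

def verify_chain(chain):
    """Recompute every hash; any tampered record breaks the chain."""
    for i, block in enumerate(chain):
        header = {k: v for k, v in block.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(header, sort_keys=True).encode()).hexdigest()
        if recomputed != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True
```

In a real deployment the chain would be replicated across nodes under a consensus algorithm, so an insider would have to compromise a majority of nodes, not just one log file, to erase their trace.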
Despite the success of learning-based systems, recent studies have highlighted video adversarial examples as a ubiquitous threat to state-of-the-art video classification systems. Video adversarial attacks add subtle noise to the original example, resulting in a false classification result. Thorough studies on how to generate video adversarial examples are essential to prevent potential attacks. Despite much research on this, existing research works on the robustness of video adversarial examples are still limited. To generate highly robust video adversarial examples, we propose a video-augmentation-based adversarial attack (v3a), focusing on the video transformations to reinforce the attack. Further, we investigate different transformations as parts of the loss function to make the video adversarial examples more robust. The experiment results show that our proposed method outperforms other adversarial attacks in terms of robustness. We hope that our study encourages a deeper understanding of adversarial robustness in video classification systems with video augmentation.
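The idea of folding transformations into the attack objective can be sketched as an expectation-over-transformation step: the gradient is averaged over augmented copies of the video, so the perturbation survives those augmentations. This is a simplified illustration, not the paper's v3a method; the two toy transformations, the step size, and `grad_fn` (a stand-in for the classifier's loss gradient) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_noise(video):
    """Augmentation 1: add mild per-pixel Gaussian noise."""
    return video + rng.normal(0.0, 0.01, video.shape)

def frame_drop(video):
    """Augmentation 2: zero out one randomly chosen frame."""
    out = video.copy()
    out[rng.integers(len(out))] = 0.0
    return out

def robust_attack_step(video, grad_fn, transforms, step=0.004):
    """One FGSM-like step whose gradient is averaged over transformed
    copies of the video, making the perturbation robust to them."""
    grads = [grad_fn(t(video)) for t in transforms]
    perturbed = video + step * np.sign(np.mean(grads, axis=0))
    return np.clip(perturbed, 0.0, 1.0)  # keep pixels in valid range
```

In practice this step would be iterated, and the transformation set would match the distortions the adversarial video must survive (compression, resizing, frame dropping).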
Recent work has shown that deep neural networks are vulnerable to backdoor attacks. Compared with the success of backdoor-attack methods, existing backdoor-defense methods lack theoretical foundations and interpretable solutions. Most defense methods are based on experience with the characteristics of previous attacks and fail to defend against new ones. In this paper, we propose IBD, an interpretable backdoor-detection method based on multivariate interactions. Using techniques from information theory, IBD reveals how a backdoor works from the perspective of multivariate interactions among features. Based on this interpretable analysis, IBD enables defenders to detect backdoored models and poisoned examples without additional information about the specific attack method. Experiments on widely used datasets and models show that IBD achieves an average increase of 78% in detection accuracy and an order-of-magnitude reduction in time cost compared with existing backdoor-detection methods.
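The abstract does not define IBD's interaction measure, but the general notion of a feature interaction can be sketched with a simple inclusion-exclusion score: the model's output on both features together, minus its outputs on each alone, plus a baseline. A backdoor trigger that dominates the prediction regardless of context tends to show atypical interaction patterns with other features. This is a simplified pairwise stand-in for the paper's information-theoretic multivariate measure; `model_fn` and the baseline are assumptions.

```python
import numpy as np

def pairwise_interaction(model_fn, x, baseline, i, j):
    """Inclusion-exclusion interaction between features i and j:
    I(i, j) = f(x_{ij}) - f(x_i) - f(x_j) + f(baseline),
    where x_S keeps only the features in S at their true values."""
    def masked(keep):
        m = baseline.copy()
        for k in keep:
            m[k] = x[k]
        return m
    return (model_fn(masked({i, j})) - model_fn(masked({i}))
            - model_fn(masked({j})) + model_fn(baseline))
```

For a purely additive model the score is zero (the features do not interact), while a multiplicative dependency yields a nonzero score; a detector can flag inputs whose interaction profile deviates sharply from clean data.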
The Internet has become the main channel of information communication and carries a large amount of secret information. Although network communication provides a convenient channel for human communication, it also carries a risk of information leakage. Traditional image steganography uses manually crafted steganographic algorithms or custom models, whereas our approach uses ordinary OCR models for information embedding and extraction. Even if our OCR models are intercepted, it is difficult to connect them to steganography. We propose a novel steganography method for character-level text images based on adversarial attacks. We exploit the complexity and uniqueness of neural network decision boundaries and use neural networks as tools for information embedding and extraction. An adversarial attack embeds the steganographic information into the character region of the image. To avoid detection by other OCR models, we optimize the generation of the adversarial samples and use a verification model to filter the generated steganographic images, which in turn ensures that the embedded information can only be recognized by our local model. Decoupling experiments show that the strategies we adopt to weaken transferability reduce the chance of other OCR models recognizing the embedded information while preserving the embedding success rate. Meanwhile, the perturbations added to embed the information are acceptably small. Finally, parameter selection experiments explore the impact of different parameters on the algorithm and demonstrate its potential, and we verify the effectiveness of our verification model in selecting the best steganographic images. The experiments show that our algorithm achieves a 100% information embedding rate and a more than 95% steganography success rate under the setting of 3 samples per group. In addition, the embedded information can hardly be detected by other OCR models.
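The mechanism of embedding a message via an adversarial attack can be sketched in a few lines: perturb a character image, within a small budget, until the local model's decision encodes the desired bit, and decode by querying the same model. This is a toy single-bit illustration under stated assumptions, not the paper's algorithm: `score_fn` stands in for the local OCR model's decision margin, `grad_fn` for its gradient, and the budget `eps` is arbitrary.

```python
import numpy as np

def embed_bit(image, score_fn, grad_fn, target_bit, steps=30,
              alpha=0.02, eps=0.1):
    """Perturb a character image until the local model's output
    encodes target_bit (positive margin decodes as 1)."""
    adv = image.copy()
    sign = 1.0 if target_bit == 1 else -1.0
    for _ in range(steps):
        if (score_fn(adv) > 0) == (target_bit == 1):
            break  # the local model already decodes the target bit
        adv = adv + sign * alpha * np.sign(grad_fn(adv))
        # keep the perturbation within an imperceptibility budget
        adv = np.clip(adv, image - eps, image + eps)
    return adv

def decode_bit(image, score_fn):
    """Extraction: query the same local model."""
    return 1 if score_fn(image) > 0 else 0
```

The paper's additional steps (reducing transferability so other OCR models cannot decode the message, and filtering candidates with a verification model) sit on top of this embed/decode loop.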