It is well established that neural networks are vulnerable to adversarial examples, which are almost imperceptible to human vision yet can cause deep models to misbehave. This phenomenon may lead to severe consequences in safety- and security-critical applications. Existing defenses tend to harden models against adversarial attacks, for example through adversarial training. However, such defenses are often impractical to deploy because of the high cost of retraining and the cumbersome changes they require to the model architecture or parameters. In this paper, we discuss the saliency map method from the perspective of enhancing model interpretability: much like introducing an attention mechanism into the model, it helps reveal how the deep network carries out object identification. We then propose a novel detection method that combines saliency maps with additional noise and uses an inconsistency strategy to detect adversarial examples. Experimental results on representative adversarial attacks, common datasets including ImageNet, and popular models show that our method detects all of these attacks with a high detection success rate. Compared with an existing state-of-the-art technique, the experiments indicate that our method is more general.
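The abstract does not spell out the implementation, but the core idea of combining a saliency map with added noise and checking for inconsistency can be sketched as follows. This is a minimal illustration under assumed choices, not the authors' code: it assumes a PyTorch image classifier `model`, a simple gradient-based saliency map, Gaussian noise of scale `sigma`, and a cosine-similarity threshold `tau`, all of which are hypothetical parameters introduced here for illustration.

```python
import torch
import torch.nn.functional as F

def saliency_map(model, x, target=None):
    """Gradient-based saliency: |d(target logit) / d(input)|, max over channels."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    if target is None:
        target = logits.argmax(dim=1)
    score = logits.gather(1, target.unsqueeze(1)).sum()
    score.backward()
    return x.grad.abs().max(dim=1)[0]  # shape (N, H, W)

def detect_by_inconsistency(model, x, sigma=0.05, tau=0.3):
    """Flag inputs whose saliency map changes too much under small Gaussian noise."""
    s_clean = saliency_map(model, x)
    s_noisy = saliency_map(model, x + sigma * torch.randn_like(x))
    # Cosine similarity between flattened saliency maps; low similarity is suspicious.
    sim = F.cosine_similarity(s_clean.flatten(1), s_noisy.flatten(1), dim=1)
    return sim < tau  # True => flagged as a likely adversarial example
```

Under this reading, a batch `x` would be screened with `detect_by_inconsistency(model, x)`: saliency maps of clean images tend to remain stable under small noise, whereas those of adversarial examples tend to shift, which is the inconsistency the detector exploits.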
Steganography is a collection of techniques for concealing the existence of information by embedding it within a cover. With the development of deep learning, novel steganography methods based on autoencoders or generative adversarial networks have appeared. While deep-learning-based steganography offers automatic generation and high capacity, the security of these algorithms still needs to be improved. In this paper, we take advantage of the linear behavior of deep networks in high-dimensional space and propose a novel steganography scheme that enhances security through adversarial examples. The system is trained with different settings on two datasets. The experimental results show that the proposed scheme can evade detection by deep-learning-based steganalyzers. In addition, the secret image can be extracted from the produced stego image with little distortion.
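The abstract likewise omits how the adversarial examples are produced, but exploiting the linear behavior of deep networks is the idea behind gradient-sign perturbations such as FGSM. The sketch below is a rough illustration of that ingredient rather than the paper's actual scheme: it assumes a PyTorch CNN steganalyzer `steganalyzer` that outputs cover/stego logits, and the names `evade_steganalyzer`, `epsilon`, and `cover_label` are assumptions made here. In a full system the perturbation would also have to preserve recoverability of the secret image.

```python
import torch
import torch.nn.functional as F

def evade_steganalyzer(steganalyzer, stego, epsilon=2.0 / 255, cover_label=0):
    """One FGSM-style step that nudges a stego image toward the detector's 'cover' class.

    Exploits the approximately linear behavior of deep networks: a small
    sign-of-gradient perturbation can flip the steganalyzer's decision while
    keeping the image visually unchanged.
    """
    stego = stego.clone().detach().requires_grad_(True)
    logits = steganalyzer(stego)
    # Loss that is minimized when the detector predicts 'cover'.
    target = torch.full((stego.size(0),), cover_label,
                        dtype=torch.long, device=stego.device)
    loss = F.cross_entropy(logits, target)
    loss.backward()
    # Step against the loss gradient (toward the cover class), then keep pixels valid.
    adv_stego = (stego - epsilon * stego.grad.sign()).clamp(0.0, 1.0)
    return adv_stego.detach()
```

In an embedding pipeline, such a perturbation would typically be applied to (or folded into the training of) the autoencoder's stego output, with the extraction loss keeping the hidden image recoverable; the abstract does not say which variant the authors use.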