Extracting correlation features between code-words with high computational efficiency is crucial to the steganalysis of Voice over IP (VoIP) streams. In this paper, we utilize attention mechanisms, which have recently attracted enormous interest due to their highly parallelizable computation and flexibility in modeling correlations in sequences, to tackle the steganalysis of Quantization Index Modulation (QIM) based steganography in compressed VoIP streams. We design a lightweight neural network named Fast Correlation Extract Model (FCEM), based solely on multi-head attention, a variant of the attention mechanism, to extract correlation features from VoIP frames. Despite its simple form, FCEM outperforms more complicated Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) models in both prediction accuracy and time efficiency. It significantly improves on the best previously reported results for detecting both low embedding rates and short samples. In addition, the proposed model roughly doubles the detection speed of prior methods when the sample length is as short as 0.1 s, making it an excellent method for online services.
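To make the idea concrete, the following is a minimal PyTorch-style sketch of a detector built only from embedding, multi-head self-attention, and a linear classifier, in the spirit of the description above. The codebook size, embedding dimension, head count, and the pooling and classification head are illustrative assumptions, not the published FCEM configuration.

```python
import torch
import torch.nn as nn

class FCEMSketch(nn.Module):
    def __init__(self, codebook_size=512, embed_dim=64, num_heads=4, num_classes=2):
        super().__init__()
        # Embed each quantization code-word index into a dense vector.
        self.embed = nn.Embedding(codebook_size, embed_dim)
        # Multi-head self-attention models correlations between code-words
        # across frames without any recurrence or convolution.
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, codewords):
        # codewords: (batch, sequence_length) integer code-word indices
        x = self.embed(codewords)
        attn_out, _ = self.attn(x, x, x)   # correlation features
        x = self.norm(x + attn_out)        # residual connection
        x = x.mean(dim=1)                  # average-pool over the sequence
        return self.classifier(x)          # cover vs. stego logits

# Example: a batch of 8 samples; a 0.1 s low-bit-rate speech clip is roughly
# 10 frames with a few quantized code-words each, here flattened to length 30.
logits = FCEMSketch()(torch.randint(0, 512, (8, 30)))
```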
With the rapid development of Natural Language Processing (NLP) technologies, text steganography methods have been significantly innovated in recent years, which poses a great threat to cybersecurity. In this paper, we propose a novel attentional LSTM-CNN model to tackle the text steganalysis problem. The proposed method first maps words into a semantic space to better exploit the semantic features of texts, and then utilizes a combination of Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) recurrent neural networks to capture both local and long-distance contextual information in steganographic texts. In addition, we apply an attention mechanism to recognize and attend to important clues within suspicious sentences. After merging the feature clues from the CNN and the LSTM, we use a softmax layer to categorize the input text as cover or stego. Experiments show that our model achieves state-of-the-art results in the text steganalysis task.
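The sketch below illustrates one plausible realization of this attentional LSTM-CNN pipeline: word embeddings feed parallel CNN and LSTM branches, attention weights the LSTM states, and the merged features pass through a softmax classifier. The vocabulary size, dimensions, kernel size, and branch-merging scheme are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnLSTMCNNSketch(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # CNN branch: local n-gram features (kernel size 3).
        self.conv = nn.Conv1d(embed_dim, hidden, kernel_size=3, padding=1)
        # LSTM branch: long-distance contextual features.
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        # Attention scores over the LSTM hidden states.
        self.attn = nn.Linear(hidden, 1)
        self.fc = nn.Linear(hidden * 2, num_classes)

    def forward(self, tokens):
        x = self.embed(tokens)                                 # (B, T, E)
        conv_feat = F.relu(self.conv(x.transpose(1, 2))).max(dim=2).values
        states, _ = self.lstm(x)                               # (B, T, H)
        weights = torch.softmax(self.attn(states), dim=1)      # (B, T, 1)
        lstm_feat = (weights * states).sum(dim=1)              # attended summary
        merged = torch.cat([conv_feat, lstm_feat], dim=1)
        return self.fc(merged)                                 # cover vs. stego logits

logits = AttnLSTMCNNSketch()(torch.randint(0, 10000, (4, 50)))
```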
As the volume of Voice over IP (VoIP) traffic rises sharply, more and more VoIP-based steganography methods have emerged in recent years, which poses a great threat to the security of cyberspace. Low bit-rate speech codecs are widely used in VoIP applications due to their powerful compression capability, and Quantization Index Modulation (QIM) steganography makes it possible to hide secret information in VoIP streams. Previous research mostly focuses on capturing inter-frame or intra-frame correlation features in code-words but ignores the hierarchical structure that exists in speech frames. In this paper, motivated by this complex multiscale structure, we design a Hierarchical Representation Network to tackle the steganalysis of QIM steganography in low-bit-rate speech signals. In the proposed model, a Convolutional Neural Network (CNN) is used to model the hierarchical structure in the speech frame, and three levels of attention mechanisms are applied at different convolution blocks, enabling the model to attend differentially to more and less important content in speech frames. Experiments demonstrate that the proposed method outperforms state-of-the-art methods, especially in detecting short samples and samples with low embedding rates. Moreover, our model requires less computation and has higher time efficiency, making it suitable for real online services.
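As a rough illustration of a hierarchical CNN with attention applied after each of three convolution blocks, the sketch below pools an attended summary at every level and concatenates the three summaries for classification. Block widths, kernel sizes, and the pooling strategy are assumptions, not the published configuration.

```python
import torch
import torch.nn as nn

class AttnPool(nn.Module):
    """Attention pooling: learn a weight per time step and take a weighted sum."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Linear(channels, 1)

    def forward(self, x):                           # x: (B, T, C)
        w = torch.softmax(self.score(x), dim=1)
        return (w * x).sum(dim=1)                   # (B, C)

class HierarchicalSketch(nn.Module):
    def __init__(self, codebook_size=512, embed_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(codebook_size, embed_dim)
        # Three convolution blocks build progressively higher-level features.
        self.blocks = nn.ModuleList([
            nn.Conv1d(embed_dim, 64, kernel_size=3, padding=1),
            nn.Conv1d(64, 64, kernel_size=3, padding=1),
            nn.Conv1d(64, 64, kernel_size=3, padding=1),
        ])
        # One attention module per block: three levels of attention.
        self.attn = nn.ModuleList([AttnPool(64) for _ in range(3)])
        self.fc = nn.Linear(64 * 3, num_classes)

    def forward(self, codewords):                   # (B, T) code-word indices
        x = self.embed(codewords).transpose(1, 2)   # (B, E, T)
        summaries = []
        for conv, attn in zip(self.blocks, self.attn):
            x = torch.relu(conv(x))
            summaries.append(attn(x.transpose(1, 2)))   # attend at this level
        return self.fc(torch.cat(summaries, dim=1))     # cover vs. stego logits

logits = HierarchicalSketch()(torch.randint(0, 512, (8, 30)))
```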