Existing traditional image steganography methods often adopt selection and mapping approaches: among all the pixels of the cover image, only those capable of carrying the secret bits without noticeable distortion are chosen. This results in a small embedding capacity. In this paper, we propose a generic image steganography system that uses an auto-encoding architecture based on end-to-end-trained deep convolutional neural networks to perform both concealment and extraction. The trained network comprises two sub-networks: a hiding network, used by the sender to encode a color image within another of the same size, and an extraction network, used by the recipient to retrieve the secret image from the received stego image. To validate our system, we carried out several tests on a range of challenging, publicly available image datasets such as ImageNet, CIFAR10, LFW, and PASCAL-VOC12. Results show that the proposed method is generic regardless of the source of the images used and solves the capacity problem at acceptable PSNR and SSIM values.
Numerous studies have used convolutional neural networks (CNNs) in the field of information concealment as well as steganalysis, achieving promising results in terms of capacity and invisibility. In this study, we propose a CNN-based steganographic model to hide a color image within another color image. The proposed model consists of two sub-networks: the hiding network, used by the sender to conceal the secret image, and the reveal network, used by the recipient to extract the secret image from the stego image. The architecture of the concealment sub-network is inspired by the U-Net auto-encoder and benefits from dilated convolutions. The reveal sub-network is inspired by the auto-encoder architecture. To ensure the integrity of the hidden secret image, the model is trained end to end: rather than being trained separately, the two sub-networks are trained simultaneously as a single pair. The loss function is designed to favor the quality of the stego image over that of the recovered secret image, as the stego image is the one that comes under steganalysis attacks. To validate the proposed model, we carried out several tests on a range of challenging, publicly available image datasets such as ImageNet, Labeled Faces in the Wild (LFW), and PASCAL-VOC12. Our results show that the proposed method can conceal an image within another of the same size, reaching an embedding capacity of 24 bits per pixel without generating visual or structural artefacts in the host image. In addition, the proposed model is generic, that is, it does not depend on the image size or the source database.
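The weighted loss described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function names, the weight value `beta`, and the toy pixel lists are all hypothetical. The key idea is that the hiding (cover-vs-stego) error counts fully, while the reveal (secret-vs-revealed) error is down-weighted, so that training prioritizes stego-image fidelity.

```python
# Sketch of a joint loss favoring the stego image, assuming a
# hypothetical weight beta < 1 on the reveal error. In the real model
# these inputs would be network outputs, not hand-written lists.

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def joint_loss(cover, stego, secret, revealed, beta=0.75):
    """Hiding error counts fully; reveal error is scaled by beta,
    so stego fidelity dominates the gradient signal."""
    return mse(cover, stego) + beta * mse(secret, revealed)

# Toy 4-pixel "images" with normalized intensities
cover    = [0.1, 0.5, 0.9, 0.3]
stego    = [0.1, 0.5, 0.9, 0.3]   # hiding network output (ideal here)
secret   = [0.2, 0.4, 0.6, 0.8]
revealed = [0.3, 0.4, 0.6, 0.8]   # reveal network output

print(joint_loss(cover, stego, secret, revealed))
```

Because `beta < 1`, an imperfect reveal is penalized less than an equally imperfect stego image, matching the stated design choice of protecting the image that is actually exposed to steganalysis.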