2020
DOI: 10.30534/ijatcse/2020/75942020
Image Steganography by Deep CNN Auto-Encoder Networks

Abstract: Existing traditional image steganography methods often adopt selection and mapping approaches: among all the pixels of the cover image, only those capable of incorporating the secret bits without noticeable distortion are chosen. This results in a small embedding capacity. In this paper, we propose a generic image steganography system that uses an auto-encoding architecture based on end-to-end trained deep Convolutional Neural Networks to ensure the process of conceal…
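To make the contrast concrete, the abstract's "selection and mapping" baseline can be illustrated with classic least-significant-bit (LSB) embedding, a representative bit-level scheme (this is a minimal sketch of the traditional approach being contrasted, not the paper's CNN method; the pixel values and secret below are toy data):

```python
def embed_lsb(cover_pixels, secret_bits):
    """Hide one secret bit in the least significant bit of each cover pixel."""
    if len(secret_bits) > len(cover_pixels):
        raise ValueError("secret exceeds embedding capacity")
    stego = list(cover_pixels)
    for i, bit in enumerate(secret_bits):
        stego[i] = (stego[i] & ~1) | bit  # overwrite only the LSB
    return stego

def extract_lsb(stego_pixels, n_bits):
    """Recover the first n_bits secret bits from the stego pixels."""
    return [p & 1 for p in stego_pixels[:n_bits]]

cover = [120, 57, 200, 33, 90, 181]   # toy 8-bit pixel values
secret = [1, 0, 1, 1]
stego = embed_lsb(cover, secret)
print(extract_lsb(stego, len(secret)))  # -> [1, 0, 1, 1]
```

The hard ceiling of one bit per usable pixel (fewer once only distortion-tolerant pixels are selected) is the small-capacity limitation the auto-encoder approach is designed to overcome.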

Cited by 7 publications (9 citation statements). References 25 publications.
“…For the first experiments, we trained our model on images with different sizes; the images were randomly selected and then resized from the ImageNet dataset. The results were then compared with our previous method cited in [7]. To accomplish the comparison, we first test the proposed method using the same loss function used in the previous model [7]; we call it "Proposed L2+L2".…”
Section: Experiments Results
confidence: 99%
“…The results were then compared with our previous method cited in [7]. To accomplish the comparison, we first test the proposed method using the same loss function used in the previous model [7]; we call it "Proposed L2+L2". Then, we use the loss function proposed in this article (see Section 3.4); we call it "Proposed L1+V+L2".…”
Section: Experiments Results
confidence: 99%
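The compared losses can be sketched in a few lines. "Proposed L2+L2" sums a squared-error term for the cover/stego pair and one for the secret/recovered pair; the "V" term of "Proposed L1+V+L2" is defined in the citing article's Section 3.4 and is not reproduced in this excerpt, so it is omitted here. All image data below is toy data, flattened to 1-D lists:

```python
def l1_loss(a, b):
    """Mean absolute error between two equally sized images."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def l2_loss(a, b):
    """Mean squared error between two equally sized images."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

cover, stego = [0.2, 0.5, 0.9], [0.25, 0.45, 0.9]
secret, recovered = [1.0, 0.0], [0.9, 0.1]

# "Proposed L2+L2": MSE on the hiding branch plus MSE on the recovery branch.
loss_l2_l2 = l2_loss(cover, stego) + l2_loss(secret, recovered)
# "Proposed L1+V+L2" without the V term (defined in the cited Section 3.4).
loss_l1_l2 = l1_loss(cover, stego) + l2_loss(secret, recovered)
```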
“…The image patches are cropped to 224 throughout the learning process. A simple and effective learning rate of 0.001 is used, following [17], [19]-[20], with a drop factor of 0.3. A mini-batch size of 8 is employed, and this is maintained for both the upsampling and the downsampling stages. Our architecture is optimized using stochastic gradient descent with a momentum of 0.9, where the best hyper-parameters are selected.…”
Section: Corollary
confidence: 99%
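The training recipe quoted above can be sketched as follows, under stated assumptions: SGD with momentum 0.9, an initial learning rate of 0.001, and a step decay that multiplies the rate by the drop factor 0.3. The decay interval and the toy quadratic objective are illustrative choices, not taken from the citing paper:

```python
def sgd_momentum_step(w, v, grad, lr, momentum=0.9):
    """One SGD-with-momentum update on parameter w with velocity v."""
    v = momentum * v - lr * grad
    return w + v, v

lr, drop_factor, drop_every = 0.001, 0.3, 10  # drop interval is an assumption
w, v = 5.0, 0.0                               # toy scalar parameter
for step in range(30):
    if step > 0 and step % drop_every == 0:
        lr *= drop_factor        # step decay of the learning rate
    grad = 2 * w                 # gradient of the toy loss w**2
    w, v = sgd_momentum_step(w, v, grad, lr)
```

In a real run the toy gradient would be replaced by the mini-batch gradient over 8 image patches, with the same update applied to every network weight.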
“…Comparison of our proposed model with the conventional method on the Food-101 dataset [20]. The best results are bolded.…”
confidence: 99%