2022
DOI: 10.1109/tpami.2021.3114555
Seek-and-Hide: Adversarial Steganography via Deep Reinforcement Learning

Cited by 14 publications (10 citation statements)
References 41 publications
“…Based on this analysis, it can be observed that the proposed model is 6% more effective than DRL [4], 4.5% more effective than BMSE [9], and 9% more effective than GAN [11] across the different test data sets, which can also be observed from figure 5, where the accuracy values are visualized. The PSNR results are shown in figure 6, where the PSNR of the proposed model is compared with the other reference models.…”
Section: Visual
confidence: 61%
“…Here, I_orig and I_decoded represent the original and decoded inputs, while t_complete and t_start represent the completion and start timestamps of the steganographic operations, with N values per input set. Based on these evaluations, the MMSE was compared with DRL [4], BMSE [9], and GAN [11] with respect to Test Data Samples (TDS), as shown in table 1.…”
Section: 4 Results and Analysis
confidence: 99%
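The excerpt above describes an evaluation built from original/decoded inputs and per-operation timestamps, though the cited equation itself is not reproduced. A minimal sketch of such an MSE-plus-timing metric (function and variable names are illustrative, not taken from the cited paper) might look like:

```python
import numpy as np

def evaluate_stego(I_orig, I_decoded, t_start, t_complete, N):
    """Mean squared error between original and decoded inputs, plus
    average processing time per input over N steganographic operations."""
    mse = np.mean((I_orig.astype(np.float64) - I_decoded.astype(np.float64)) ** 2)
    avg_time = (t_complete - t_start) / N
    return mse, avg_time

# Toy usage with random 8-bit "images".
rng = np.random.default_rng(0)
orig = rng.integers(0, 256, size=(4, 32, 32), dtype=np.uint8)
decoded = orig.copy()        # perfect decoding -> MSE of 0
t0, t1 = 10.0, 12.0          # pretend the batch took 2 seconds
mse, avg = evaluate_stego(orig, decoded, t0, t1, N=4)
print(mse, avg)              # -> 0.0 0.5
```

Lower MSE indicates more faithful secret recovery, while the timing term captures the throughput dimension the excerpt's TDS comparison refers to.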
“…To maximise the benefit of a simulated steganalytic environment, the agent in SPAR-RL used a policy network to decompose the embedding process into pixel-wise actions, while the environment used an environment network to award rewards to individual pixels. An adaptive local image steganography (AdaSteg) method was presented by Pan et al. (Pan et al, 2022) to enable image steganography that is both scale- and location-adaptive. The suggested technique increased the security of steganography by adaptively hiding the secret at a local scale, and also made it possible to conceal several secrets within a single cover.…”
Section: Reinforcement Learning Based Steganographic Algorithms
confidence: 99%
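SPAR-RL's decomposition of embedding into pixel-wise actions with per-pixel rewards, as described above, could be sketched as follows. This is a toy illustration only: the policy and environment networks are replaced by a uniform policy and a simple distortion-penalty reward, and all names are hypothetical rather than from the cited work.

```python
import numpy as np

def pixelwise_embedding_step(cover, policy_probs, reward_fn, rng):
    """One sketch step: sample a -1/0/+1 modification per pixel from the
    policy, then let the environment score each pixel individually."""
    h, w = cover.shape
    actions = np.array([-1, 0, 1])
    # policy_probs[i, j, k] = P(action k) at pixel (i, j)
    flat = policy_probs.reshape(-1, 3)
    idx = np.array([rng.choice(3, p=p) for p in flat]).reshape(h, w)
    modification = actions[idx]
    stego = np.clip(cover.astype(int) + modification, 0, 255)
    rewards = reward_fn(cover, stego)  # per-pixel reward "from the environment"
    return stego, rewards

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(8, 8))
probs = np.full((8, 8, 3), 1 / 3)                   # uniform toy policy
reward = lambda c, s: -np.abs(s - c).astype(float)  # toy distortion penalty
stego, r = pixelwise_embedding_step(cover, probs, reward, rng)
print(stego.shape, r.shape)
```

In the actual method the policy probabilities would come from a trained network and the per-pixel rewards from a steganalytic environment network; the point of the sketch is only the action/reward factorization over pixels.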