2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00161
Light Field Messaging With Deep Photographic Steganography

Cited by 113 publications (63 citation statements)
References 31 publications
“…After that, RedMark [49] uses two Fully Convolutional Neural Networks (FCNs) with residual connections to embed watermarks in the frequency domain without adversarial training. Unlike the dependent deep hiding (DDH) methods [8], [14]–[17], which adapt the watermark to the original cover image, UDH [50] proposes a universal deep hiding method that embeds the watermark independently of the cover image. These existing works have demonstrated a variety of neural network structures that effectively realize message embedding and extraction, ensuring that the watermarked image and the cover image have little or even no perceptual difference.…”
Section: A. Digital Image Watermarking
confidence: 99%
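The distinction drawn in the excerpt above, between cover-dependent (DDH) and cover-agnostic (UDH) embedding, can be sketched with two toy encoders. This is a minimal illustration in PyTorch, not the architectures of [8] or [50]; the layer widths and the 0.02 perturbation strength are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class DependentEncoder(nn.Module):
    """DDH-style sketch: the encoder sees the cover, so the embedding adapts to it."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, cover, secret):
        # Concatenate cover and secret along channels, predict the stego image.
        return self.net(torch.cat([cover, secret], dim=1))

class UniversalEncoder(nn.Module):
    """UDH-style sketch: the perturbation is computed from the secret alone."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())

    def forward(self, cover, secret):
        # The same learned residual can be added to any cover image.
        return cover + 0.02 * self.net(secret)

cover = torch.rand(1, 3, 128, 128)
secret = torch.rand(1, 3, 128, 128)
stego_ddh = DependentEncoder()(cover, secret)
stego_udh = UniversalEncoder()(cover, secret)
```

The practical consequence of the universal formulation is that the learned residual depends only on the secret and can be reused across cover images, whereas the dependent encoder must be rerun for every cover.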
“…For non-differentiable distortions, including JPEG compression, some methods [7], [8], [49] simulate them with a differentiable approximation, allowing the network to be trained end to end. In [9] and [14], some distortions are instead generated by a trained CNN rather than explicitly modeled from a fixed pool during training, which is another way to handle non-differentiable and hard-to-model distortions. In addition, Liu et al. [51] design a redundant two-stage separable deep learning framework to address the problems of one-stage end-to-end training, such as image quality degradation and the difficulty of simulating noise attacks with differentiable layers.…”
Section: A. Digital Image Watermarking
confidence: 99%
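The workaround for non-differentiable distortions described above can be illustrated with a straight-through quantization layer: the forward pass applies the lossy operation, while the backward pass leaves gradients untouched so encoder and decoder can still be trained end to end. This is a simplified stand-in, not the actual differentiable JPEG approximations used in [7], [8], or [49].

```python
import torch
import torch.nn as nn

class QuantizationNoise(nn.Module):
    """Rounds pixel values as compression would, but keeps an identity gradient."""
    def forward(self, stego):
        x = stego.clamp(0, 1) * 255.0
        # Straight-through estimator: forward uses round(), backward sees identity.
        x_q = x + (x.round() - x).detach()
        return x_q / 255.0

stego = torch.rand(1, 3, 64, 64, requires_grad=True)
distorted = QuantizationNoise()(stego)
distorted.sum().backward()           # gradients reach `stego` despite rounding
print(stego.grad.abs().sum() > 0)    # tensor(True)
```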
“…HiDDeN [Zhu et al. 2018] augments messages by applying color distortions and noise to the encoded image, assuming perfect alignment during decoding without any spatial transformations. Wengrowski and Dana [2019] capture a dataset to model a camera-display transfer function in order to add distortions to input images. However, none of these methods account for the localization problem of encoded messages in input images.…”
Section: Related Work
confidence: 99%
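A rough sketch of the distortion-pool idea attributed to HiDDeN in the excerpt above: the encoded image is perturbed with a random per-channel color scale and additive Gaussian noise before being handed to the decoder, with no spatial warping, matching the perfect-alignment assumption the excerpt points out. The noise parameters here are placeholder values, not taken from Zhu et al. 2018 or from Wengrowski and Dana's camera-display dataset.

```python
import torch
import torch.nn as nn

class SimpleDistortionPool(nn.Module):
    """Toy distortion layer: per-channel color jitter plus additive Gaussian noise."""
    def __init__(self, noise_std=0.02, color_jitter=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.color_jitter = color_jitter

    def forward(self, encoded):
        b, c, _, _ = encoded.shape
        # Per-channel multiplicative color shift in [1 - j, 1 + j].
        scale = 1.0 + self.color_jitter * (2 * torch.rand(b, c, 1, 1) - 1)
        noisy = encoded * scale + self.noise_std * torch.randn_like(encoded)
        return noisy.clamp(0.0, 1.0)

encoded = torch.rand(2, 3, 128, 128)
distorted = SimpleDistortionPool()(encoded)   # decoder input during training
```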