Using Natural Steganography (NS), a cover raw image acquired at sensitivity ISO_1 is transformed into a stego image whose statistical distribution is similar to that of a cover image acquired at sensitivity ISO_2 > ISO_1. This paper proposes such an embedding scheme for color sensors in the JPEG domain, thus extending the prior art proposed for the pixel domain and for the JPEG domain with monochrome sensors. We first show that color sensors generate strong intra-block and inter-block dependencies between DCT coefficients, and that these dependencies are due to the demosaicking step of the development process. Capturing these dependencies with an empirical covariance matrix, we propose a pseudo-embedding algorithm on greyscale JPEG images which uses up to four sub-lattices and 64 lattices to embed information while preserving the estimated correlations among DCT coefficients. We then compute an approximation of the average embedding rate w.r.t. the JPEG quality factor and evaluate the empirical security of the proposed scheme for linear and non-linear demosaicking schemes. Our experiments show that we can achieve high capacity (around 2 bits per nzAC) with high empirical security (P_E ≈ 30% using DCTR at QF 95).
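As a rough illustration of the NS principle described above (a sketch under assumed parameters, not the paper's implementation), the following Python snippet adds a heteroscedastic Gaussian stego signal whose variance makes an ISO_1 raw image mimic the photonic noise of an ISO_2 acquisition; the affine noise parameters a1, b1, a2, b2 are hypothetical placeholders.

```python
import numpy as np

def stego_noise_variance(raw, a1, b1, a2, b2):
    """Per-photosite variance of the stego signal so that a cover acquired
    at ISO_1 mimics the photonic noise of ISO_2 > ISO_1, assuming the usual
    affine sensor-noise model var = a * raw + b (a1, b1, a2, b2 hypothetical)."""
    return (a2 - a1) * raw + (b2 - b1)

rng = np.random.default_rng(0)
raw = rng.uniform(0.0, 1.0, size=(8, 8))        # toy normalized raw photosites
var = stego_noise_variance(raw, a1=1e-3, b1=1e-5, a2=4e-3, b2=4e-5)
stego = raw + rng.normal(0.0, np.sqrt(var))     # independent Gaussian stego signal
```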
In order to achieve high practical security, Natural Steganography (NS) uses cover images captured at sensitivity ISO_1 and generates stego images mimicking sensitivity ISO_2 > ISO_1. This is achieved by adding to the cover a stego signal that mimics the sensor photonic noise. This paper proposes an embedding mechanism to perform NS in the JPEG domain after linear development, by explicitly computing the correlations between DCT coefficients before quantization. In order to compute the covariance matrix of the photonic noise in the DCT domain, we first develop the matrix representation of demosaicking, luminance averaging, pixel selection, and the 2D DCT. A detailed analysis of the resulting covariance matrix is carried out to explain the origins of the correlations between the coefficients of 3 × 3 DCT blocks. An embedding scheme that takes all these correlations into account is then presented; it employs 4 sub-lattices and 64 lattices per sub-lattice. The modification probabilities of each DCT coefficient are then derived by computing conditional probabilities from the multivariate Gaussian distribution using the Cholesky decomposition of the covariance matrix. This derivation is also used to compute the embedding capacity of each image. Using a dedicated database called E1 Base, we show that NS in the JPEG domain (J-Cov-NS) achieves both high capacity (more than 2 bits per non-zero AC DCT coefficient) and high practical security (P_E ≈ 40% using DCTR from QF 75 to QF 100).
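The Cholesky-based derivation of conditional probabilities can be sketched as follows (a standard conditional-Gaussian identity; the 3 × 3 covariance below is a toy stand-in, not the paper's actual photonic-noise covariance): if Sigma = L Lᵀ, then coefficient x_i given the already-embedded x_0..x_{i-1} is Gaussian with mean L[i,:i] · L[:i,:i]⁻¹ x_{:i} and standard deviation L[i,i].

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def conditional_gaussian(L, x_prev, i):
    """Mean and std of DCT coefficient i given the already-embedded
    coefficients x_prev = x[0:i], from the Cholesky factor L of Sigma."""
    z_prev = solve_triangular(L[:i, :i], x_prev, lower=True)
    return L[i, :i] @ z_prev, L[i, i]

# toy covariance standing in for the photonic-noise covariance in the DCT domain
Sigma = np.array([[2.0, 0.6, 0.1],
                  [0.6, 1.5, 0.4],
                  [0.1, 0.4, 1.0]])
L = cholesky(Sigma, lower=True)
mean, std = conditional_gaussian(L, x_prev=np.array([0.3, -0.2]), i=2)
```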
IEMN-DHS, UMR CNRS 8520, ENIC, Cité Scientifique, Rue Guglielmo Marconi, 59650 Villeneuve d'Ascq, France; IEMN-DHS, UMR CNRS, ISEN, 41 Bd. Vauban, 59046 Lille, France. Abstract — Recently, wireless technologies have known an increasing success, thus becoming an interesting field of study. One of the topics of research is multiple antenna arrays, where signal recovery is not obvious; different methods with different performances have been suggested. From a statistical analysis of errors at the output of Zero-Forcing, we have constructed a method, which we present in this article, that combines Zero-Forcing and Maximum Likelihood. We demonstrate that it provides both low complexity and nearly optimal results regarding Binary Error Rate (BER). Furthermore, we show its relevance compared to the V-BLAST algorithm.
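A minimal sketch of such a ZF/ML hybrid (an assumed variant for illustration, not necessarily the authors' exact algorithm): Zero-Forcing equalization first, then a Maximum-Likelihood search restricted to the constellation points closest to each ZF output, which keeps the complexity far below an exhaustive search.

```python
import numpy as np
from itertools import product

def zf_ml_detect(H, y, const=(-3.0, -1.0, 1.0, 3.0), keep=2):
    """Equalize with Zero-Forcing, then run an ML search over the 'keep'
    constellation points nearest each ZF output (reduced candidate set)."""
    x_zf = np.linalg.pinv(H) @ y
    cands = [sorted(const, key=lambda s: abs(s - xi))[:keep] for xi in x_zf]
    best, best_metric = None, np.inf
    for x in product(*cands):                       # 2^N instead of 4^N points
        m = np.linalg.norm(y - H @ np.array(x)) ** 2
        if m < best_metric:
            best, best_metric = np.array(x), m
    return best

H = np.array([[1.0, 0.4], [0.3, 0.9]])              # toy 2x2 channel
y = H @ np.array([3.0, -1.0]) + 0.1 * np.random.default_rng(1).normal(size=2)
x_hat = zf_ml_detect(H, y)                          # 4-PAM symbol estimates
```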
This paper proposes to use the statistical analysis of the correlations between DCT coefficients to design a new synchronization strategy for cost-based steganographic schemes in the JPEG domain. First, an analysis of the photonic noise is performed on the covariance matrix of DCT coefficients of neighboring blocks after a development pipeline similar to the one used to generate BossBase. This analysis exhibits (i) a decomposition into 8 disjoint sets of uncorrelated coefficients (4 sets per block, used by 2 disjoint lattices) and (ii) the fact that each DCT coefficient is correlated with 38 other coefficients belonging either to the same block or to connected blocks. Using the uncorrelated groups, an embedding scheme can be designed using only 8 disjoint lattices. The proposed embedding scheme relies on the following ingredients. Firstly, we convert the empirical cost associated with each coefficient into a Gaussian distribution whose variance is directly computed from the embedding costs. Secondly, we derive conditional Gaussian distributions from a multivariate distribution, considering only the correlated coefficients that have already been modified by the embedding scheme. The covariance matrix takes into account both the correlations exhibited by the analysis and the variances derived from the costs. This synchronization scheme yields a gain of at least 7% at QF 95 for an embedding rate close to 0.3 bit per nzAC coefficient, using DCTR feature sets, for both UERD and J-UNIWARD.
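The first ingredient, turning empirical costs into a Gaussian model, can be illustrated as follows (a hedged sketch: the Gibbs form for change probabilities is standard, but the cost-to-variance mapping below is an assumption, matching P(|n| > 0.5) = 2β so that rounding the Gaussian noise reproduces the change rate):

```python
import numpy as np
from scipy.stats import norm

def change_prob(rho, lam):
    """Ternary-embedding change probability from an empirical cost rho
    (standard Gibbs form used by cost-based schemes such as UERD)."""
    e = np.exp(-lam * rho)
    return e / (1.0 + 2.0 * e)

def cost_to_sigma(beta):
    """Std of a centered Gaussian whose rounding to the nearest integer
    changes the coefficient with probability 2*beta (assumed mapping,
    ignoring changes larger than +/-1)."""
    return 0.5 / norm.ppf(1.0 - beta)

rho = 1.2                      # toy embedding cost (e.g. from UERD or J-UNIWARD)
beta = change_prob(rho, lam=2.0)
sigma = cost_to_sigma(beta)    # per-coefficient std feeding the covariance matrix
```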
Authentication of printed documents using high-resolution 2D codes relies on the fact that the printing process can be considered a Physical Unclonable Function that guarantees the security of the authentication system. The 2D code is corrupted by the printing process in a non-invertible way through induced decoding errors, and the gap between the bit error rates generated by the first and second printing processes enables authentication of the document. In this context, the adversary's goal is to minimize the number of decoding errors obtained from the printed code in order to generate a forgery that can pass as original. The goal of this paper is to maximize the decoding performance of the adversary by inferring the original code from an observation of the printed one. After presenting the different kinds of features that can be derived from the 2D code (the scanner outputs, statistical moments, and features derived from Principal Component Analysis and Partial Least Squares), we present the different classifiers that have been evaluated and show that the bit error rate decreases from 32% using the baseline decoding to 22% using appropriate features and classifiers.
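As an illustration of the feature-plus-classifier decoding strategy (a self-contained toy with synthetic print-and-scan noise, not the paper's data or exact pipeline), the sketch below extracts statistical moments from each scanned module and trains a logistic-regression classifier to recover the printed bits:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def patch_features(patch):
    """Statistical moments of one scanned module (one code bit):
    mean, variance, skewness, kurtosis of the grey levels."""
    p = patch.ravel().astype(float)
    mu, sd = p.mean(), p.std() + 1e-9
    z = (p - mu) / sd
    return np.array([mu, sd ** 2, (z ** 3).mean(), (z ** 4).mean()])

# synthetic modules: white (bit 0) vs black (bit 1), corrupted by print/scan noise
rng = np.random.default_rng(2)
bits = rng.integers(0, 2, size=500)
X = np.array([patch_features(rng.normal(200 - 120 * b, 35, size=(8, 8)))
              for b in bits])

clf = LogisticRegression().fit(X[:400], bits[:400])
ber = np.mean(clf.predict(X[400:]) != bits[400:])   # empirical bit error rate
```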