This paper proposes to investigate the impact of the channel model for authentication systems based on codes that are corrupted by a physically unclonable noise such as the one introduced by a printing process. The core of such a system, from the receiver's point of view, is a statistical test that recognizes and accepts an original code corrupted by noise and rejects any illegal copy or counterfeit. This study highlights the fact that the probabilities of type I and type II errors can be approximated more accurately, by several orders of magnitude, using the Cramér-Chernoff theorem instead of a Gaussian approximation. The practical computation of these error probabilities is also possible using Monte Carlo simulations combined with importance sampling. By deriving the optimal test within a Neyman-Pearson setup, a first theoretical analysis shows that thresholding the received code induces a loss of performance. A second analysis seeks the channel parameters of the model that maximize the authentication performance. This is possible not only when the opponent's channel is identical to the legitimate channel but also when the opponent's channel is different, leading in that case to a min-max game between the two players. Finally, we evaluate the impact of the receiver's uncertainty about the opponent's channel, and we show that authentication remains possible whenever the receiver can observe forged codes and use them to estimate the parameters of the model.
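To make the comparison concrete, the following sketch contrasts the Gaussian approximation and the Chernoff bound in a deliberately simplified setting (i.i.d. Bernoulli symbol errors and a threshold on the Hamming weight, an assumption of this illustration rather than the paper's print-and-scan model), and estimates the same tail probability by Monte Carlo with importance sampling.

```python
# Toy sketch (assumed setting, not the paper's channel model): the test statistic
# is the number of mismatched symbols, i.i.d. Bernoulli(p) under the legitimate
# hypothesis, and we want the probability that it exceeds n*a.
# Compared: (i) Gaussian approximation, (ii) Cramer-Chernoff bound,
# (iii) Monte Carlo with importance sampling (exponential tilting).
import numpy as np
from math import exp, log, sqrt
from scipy.stats import norm

p, a, n = 0.1, 0.3, 200        # symbol error rate, decision threshold, code length

# (i) Gaussian (central-limit) approximation of the tail probability
gauss = norm.sf((a - p) * sqrt(n) / sqrt(p * (1 - p)))

# (ii) Cramer-Chernoff bound exp(-n * KL(a || p)) for Bernoulli variables
kl = a * log(a / p) + (1 - a) * log((1 - a) / (1 - p))
chernoff = exp(-n * kl)

# (iii) Importance sampling: simulate under the tilted law Bernoulli(a), where the
# rare event becomes typical, and reweight by the likelihood ratio.  The weight
# depends only on the Hamming weight s, so s is drawn directly from Binomial(n, a).
rng = np.random.default_rng(0)
s = rng.binomial(n, a, size=200_000)
w = (p / a) ** s * ((1 - p) / (1 - a)) ** (n - s)
is_est = np.mean(w * (s >= n * a))

print(f"Gaussian approximation : {gauss:.3e}")
print(f"Chernoff bound         : {chernoff:.3e}")
print(f"Importance sampling MC : {is_est:.3e}")
```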
This paper proposes to investigate the impact of the channel model for authentication systems based on codes that are corrupted by a physically unclonable noise such as the one introduced by a printing process. The core of such a system is the receiver's comparison between an original binary code, an original code corrupted by noise, and a copy of the original code. We analyze two strategies, depending on whether or not the receiver uses a binarized version of its observation to perform the authentication test. By deriving the optimal test within a Neyman-Pearson setup, a theoretical analysis shows that thresholding the code induces a loss of performance. This study also highlights the fact that the probabilities of type I and type II errors can be approximated more accurately, by several orders of magnitude, by computing Chernoff bounds instead of a Gaussian approximation. Finally, we evaluate the impact of the receiver's uncertainty about the opponent's channel and show that authentication remains possible whenever the receiver can observe forged codes and use them to estimate the parameters of the model.
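The following sketch illustrates the loss caused by thresholding on an assumed toy channel (additive Gaussian print noise, and symbol flips with probability eps introduced by the opponent's threshold-and-reprint step; this is only a stand-in for the paper's actual model). It compares the Neyman-Pearson likelihood ratio test built on the grayscale observation with the same test applied after binarization, at a fixed type I error.

```python
# Toy sketch (assumed model): a binary code x is observed through additive
# Gaussian noise.  Under H0 (original) y = x + N(0, s^2); under H1 (copy) each
# symbol was first flipped with probability eps by the opponent.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, s, eps, trials = 512, 0.35, 0.05, 5_000
x = rng.integers(0, 2, n).astype(float)           # reference binary code

def observe(copy: bool):
    """Simulate one received code under H0 (copy=False) or H1 (copy=True)."""
    symbols = x.copy()
    if copy:                                       # opponent's estimation errors
        flips = rng.random(n) < eps
        symbols[flips] = 1 - symbols[flips]
    return symbols + rng.normal(0, s, n)

def llr_gray(y):
    """Log-likelihood ratio log p1(y)/p0(y) on the grayscale observation."""
    f_same = norm.pdf(y, loc=x, scale=s)           # density if symbol not flipped
    f_flip = norm.pdf(y, loc=1 - x, scale=s)       # density if symbol flipped
    return np.sum(np.log((1 - eps) * f_same + eps * f_flip) - np.log(f_same))

def llr_binary(y):
    """Same test after thresholding y at 0.5 (only the Hamming distance matters)."""
    b = (y > 0.5).astype(float)
    p0 = norm.sf(0.5 / s)                          # bit error probability under H0
    p1 = (1 - eps) * p0 + eps * (1 - p0)           # bit error probability under H1
    d = np.sum(b != x)
    return d * np.log(p1 / p0) + (n - d) * np.log((1 - p1) / (1 - p0))

obs_h0 = [observe(False) for _ in range(trials)]
obs_h1 = [observe(True) for _ in range(trials)]
for name, stat in [("grayscale", llr_gray), ("binarized", llr_binary)]:
    h0 = np.array([stat(y) for y in obs_h0])
    h1 = np.array([stat(y) for y in obs_h1])
    thr = np.quantile(h0, 0.99)                    # fix the type I error at 1%
    beta_err = np.mean(h1 <= thr)                  # empirical type II error
    print(f"{name:9s}: type II error ~ {beta_err:.3f} at type I error = 0.01")
```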
Microscopic analysis of printed paper shows regularly spaced dots whose random shapes depend on the printing technology, the configuration of the printer, and the paper properties. Modelling and identifying the interactions between paper and ink is required to qualify printing quality, to control the printing process, and for authentication applications. This paper proposes an approach to identify the source printer using micro-tags consisting of microscopic printed dots embedded in documents. These random shape features are modelled and extracted as a signature of a particular printer. We propose a probabilistic model, described by a vector of parameters, based on a spatial binary interaction model with an inhomogeneous Markov chain. These parameters determine the location of the dots and describe the diverse random microstructures of the microscopic printed dots. A Markov chain Monte Carlo (MCMC) algorithm is then developed to approximate the minimum mean squared error (MMSE) estimator. The performance is assessed through numerical simulations, and real printed dots from common printing technologies (conventional offset, waterless offset, inkjet, laser) are used to assess the effectiveness of the model.
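As a hedged illustration of how MCMC can approximate an MMSE estimator for a binary dot image, the sketch below uses a deliberately simplified model: a homogeneous Ising prior and i.i.d. Gaussian observation noise instead of the paper's inhomogeneous Markov-chain model, with a Gibbs sampler whose sample average approximates the posterior mean.

```python
# Illustrative sketch (simplified assumptions, not the paper's model): approximate
# the MMSE estimate of a binary dot image by Gibbs sampling, with a homogeneous
# Ising prior on the binary field and i.i.d. Gaussian observation noise.
import numpy as np

rng = np.random.default_rng(2)
H, W, sigma, beta = 32, 32, 0.6, 1.0

# Synthetic "printed dot": a filled disc observed through Gaussian noise.
yy, xx = np.mgrid[0:H, 0:W]
truth = ((yy - H / 2) ** 2 + (xx - W / 2) ** 2 < 8 ** 2).astype(float)
obs = truth + rng.normal(0, sigma, (H, W))

def neighbourhood(field, i, j):
    """Values of the 4-connected neighbours of pixel (i, j)."""
    return [field[a, b] for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
            if 0 <= a < H and 0 <= b < W]

x = (obs > 0.5).astype(float)             # initialise at the thresholded image
accum = np.zeros((H, W))
burn_in, n_iter = 20, 100

for it in range(n_iter):
    for i in range(H):
        for j in range(W):
            nb = neighbourhood(x, i, j)
            # Full conditional log-odds of x[i, j] = 1 given neighbours and data:
            # Ising prior term + Gaussian likelihood term.
            log_odds = beta * (2 * sum(nb) - len(nb)) + (obs[i, j] - 0.5) / sigma ** 2
            x[i, j] = float(rng.random() < 1.0 / (1.0 + np.exp(-log_odds)))
    if it >= burn_in:
        accum += x

posterior_mean = accum / (n_iter - burn_in)       # approximates the MMSE estimate
mmse_map = (posterior_mean > 0.5).astype(float)   # hard decision, if needed
print("pixel error rate:", np.mean(mmse_map != truth))
```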
Localizing tampered regions in forged images is an important and challenging problem in forensic applications. Although image forgery localization has been studied extensively over the past decade, each method still has its own limitations. It is therefore promising to fuse different forensic approaches in order to obtain better localization performance. In this paper, we propose a framework that uses Dempster-Shafer theory to aggregate the decision maps of two forensic approaches: a Photo Response Non-Uniformity (PRNU) based approach and a statistical-feature-based approach. PRNU noise can be regarded as a camera fingerprint and can therefore be used effectively to localize tampering. However, the most challenging limitation of this approach is its false identifications in textured, saturated, and dark regions. By combining it with the statistical-feature-based approach, we can decrease this false alarm rate in saturated and dark regions. Extensive experimental results demonstrate that the proposed method significantly outperforms the PRNU-based approach alone.
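As an illustration of the fusion step, the sketch below implements Dempster's rule of combination on the two-element frame {tampered, authentic}; the per-pixel mass assignments are hypothetical placeholders standing in for the actual outputs of the PRNU and statistical-feature detectors.

```python
# Minimal sketch of Dempster's rule of combination for fusing, per pixel, the
# decisions of two forensic detectors on the frame {T (tampered), A (authentic)}.
# The mass values below are illustrative placeholders, not real detector outputs.
from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule: m(X) = sum_{B ∩ C = X} m1(B) m2(C) / (1 - K)."""
    fused, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            fused[inter] = fused.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc                 # mass K assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

# Hypothetical per-pixel masses.  The set {T, A} carries each detector's
# ignorance (e.g. PRNU is unreliable in dark or saturated regions).
m_prnu = {frozenset({"T"}): 0.6, frozenset({"A"}): 0.1, frozenset({"T", "A"}): 0.3}
m_stat = {frozenset({"T"}): 0.2, frozenset({"A"}): 0.5, frozenset({"T", "A"}): 0.3}

fused = combine(m_prnu, m_stat)
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
print("belief(tampered) =", round(fused.get(frozenset({"T"}), 0.0), 3))
```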