The secrecy capacity of the type II wiretap channel (WTC II) with a noisy main channel is currently an open problem. Herein, its secrecy capacity is derived and shown to be equal to its semantic-security (SS) capacity. In this setting, the legitimate users communicate via a discrete memoryless (DM) channel in the presence of an eavesdropper that has perfect access to a subset of its choosing of the transmitted symbols, constrained to a fixed fraction of the blocklength. The secrecy criterion is achieved simultaneously for all possible eavesdropper subset choices. The SS criterion demands negligible mutual information between the message and the eavesdropper's observations, even when maximized over all message distributions. A key tool for the achievability proof is a novel and stronger version of Wyner's soft-covering lemma. Specifically, a random codebook is shown to achieve the soft-covering phenomenon with high probability, with a probability of failure that is doubly exponentially small in the blocklength. Since the combined number of messages and subsets grows only exponentially with the blocklength, SS for the WTC II is established by applying the union bound and invoking the stronger soft-covering lemma. The direct proof shows that rates up to the weak-secrecy capacity of the classic WTC with a DM erasure channel (EC) to the eavesdropper are achievable. The converse follows by establishing the capacity of this DM wiretap EC as an upper bound for the WTC II. From a broader perspective, the stronger soft-covering lemma constitutes a tool for showing the existence of codebooks that satisfy exponentially many constraints, a beneficial ability for many other applications in information-theoretic security.

Index Terms: Erasure wiretap channel, information-theoretic security, semantic security, soft-covering lemma, wiretap channel of type II.
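For concreteness, the shape of such a "stronger" soft-covering statement can be sketched as follows; the notation ($Q_U$, $Q_{V|U}$) and the exact exponents $\gamma_1, \gamma_2$ are illustrative assumptions, not quoted from the paper:

\[
\mathbb{P}_{\mathcal{C}_n}\!\left( D\!\left( P_{V^n \mid \mathcal{C}_n} \,\middle\|\, Q_V^{\otimes n} \right) > e^{-n\gamma_1} \right) \;\le\; e^{-e^{n\gamma_2}}, \qquad \text{whenever } R > I(U;V),
\]

where $\mathcal{C}_n$ is a codebook of $e^{nR}$ codewords drawn i.i.d. from $Q_U^{\otimes n}$, and $P_{V^n \mid \mathcal{C}_n}$ is the output distribution that a uniformly chosen codeword induces through the memoryless channel $Q_{V|U}$. Because the failure probability on the right is doubly exponentially small while the number of (message, eavesdropper-subset) pairs grows only exponentially, a union bound shows that a single codebook satisfies all such constraints simultaneously with high probability, which is exactly the mechanism the abstract describes.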
We study the state-dependent (SD) wiretap channel (WTC) with non-causal channel state information (CSI) at the encoder. This model subsumes all other instances of CSI availability as special cases, and calls for an efficient utilization of the state sequence for both reliability and security purposes. A lower bound on the secrecy capacity, which improves upon the previously best known result by Chen and Han Vinck, is derived based on a novel superposition coding scheme. An example in which the proposed scheme achieves strictly higher rates is provided. Specializing the lower bound to the case where CSI is also available to the decoder reveals that it is at least as good as the achievable formula by Chia and El Gamal, which is already known to outperform the adaptation of the Chen and Han Vinck code to the encoder-and-decoder CSI scenario. Our achievability result gives rise to an exact secrecy-capacity characterization for a class of SD-WTCs that decompose into a product of two WTCs, one of which is independent of the state while the other depends only on it. The results are derived under the strict semantic-security metric, which requires negligible information leakage for all message distributions. The proof of achievability relies on a stronger version of the soft-covering lemma for superposition codes.
This paper studies convergence of empirical measures smoothed by a Gaussian kernel. Specifically, consider approximating $P * \mathcal{N}_\sigma$, for $\mathcal{N}_\sigma \triangleq \mathcal{N}(0, \sigma^2 I_d)$, by $\hat{P}_n * \mathcal{N}_\sigma$, where $\hat{P}_n$ is the empirical measure, under different statistical distances. The convergence is examined in terms of the Wasserstein distance, total variation (TV), Kullback-Leibler (KL) divergence, and $\chi^2$-divergence. We show that the approximation error under the TV distance and 1-Wasserstein distance ($W_1$) converges at the rate $e^{O(d)} n^{-1/2}$, in remarkable contrast to a (typical) $n^{-1/d}$ rate for unsmoothed $W_1$ (and $d \geq 3$). Similarly, for the KL divergence, squared 2-Wasserstein distance ($W_2^2$), and $\chi^2$-divergence, the convergence rate is $e^{O(d)} n^{-1}$, but only provided that $P$ achieves finite input-output $\chi^2$ mutual information across the additive white Gaussian noise (AWGN) channel. If the latter condition is not met, the rate changes to $\omega(n^{-1})$ for the KL divergence and $W_2^2$, while the $\chi^2$-divergence becomes infinite, a curious dichotomy. As a main application we consider estimating the differential entropy $h(S+Z)$, where $S \sim P$ and $Z \sim \mathcal{N}_\sigma$ are independent $d$-dimensional random variables. The distribution $P$ is unknown and belongs to some nonparametric class, but $n$ independent and identically distributed (i.i.d.) samples from it are available. Despite the regularizing effect of noise, we first show that any good estimator (within an additive gap) for this problem must have a sample complexity that is exponential in $d$. We then leverage the empirical approximation results to show that the absolute-error risk of the plug-in estimator converges as $e^{O(d)} n^{-1/2}$, thus attaining the parametric rate. This establishes the plug-in estimator as minimax rate-optimal for the considered problem, with sharp dependence of the convergence rate both on $n$ and on $d$. We provide numerical results comparing the performance of the plug-in estimator to that of general-purpose (unstructured) differential entropy estimators (based on kernel density estimation (KDE) or $k$ nearest neighbors (kNN) techniques) applied to samples of $S+Z$. These results reveal a significant empirical superiority of the plug-in estimator over state-of-the-art KDE and kNN methods. As a motivating utilization of the plug-in approach, we estimate information flows in deep neural networks and discuss Tishby's Information Bottleneck and the compression conjecture, among others.

Here $\hat{P}_n = \frac{1}{n}\sum_{i=1}^n \delta_{S_i}$ is the empirical measure. Due to the popularity of the additive Gaussian noise model, we start by exploring this smoothed empirical approximation problem in detail, under several additional statistical distances.

A. Convergence of Empirical Measures Smoothed by a Gaussian Kernel

Consider the empirical approximation error $\mathbb{E}\,\delta(\hat{P}_{S^n} * \mathcal{N}_\sigma, P * \mathcal{N}_\sigma)$ under some statistical distance $\delta$. Various choices of $\delta$ are considered, such as the 1-Wasserstein and (squared) 2-Wasserstein distances, total variation (TV), Kullback-Leibler (KL) divergence, and $\chi^2$-divergence. We show that, when $P$ is subgaussian, the approximation error under the 1-Was...
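To make the plug-in approach concrete, here is a minimal sketch: the quantity $h(\hat{P}_n * \mathcal{N}_\sigma)$ is the differential entropy of an $n$-component Gaussian mixture centered at the samples, and one simple way to evaluate it is by Monte Carlo integration. The function name `plugin_entropy` and the Monte Carlo evaluation are our assumptions for illustration, not the paper's reference implementation:

```python
import numpy as np
from scipy.special import logsumexp

def plugin_entropy(samples, sigma, num_mc=10_000, seed=None):
    r"""Plug-in estimate of h(S + Z): the differential entropy of the
    Gaussian mixture \hat{P}_n * N(0, sigma^2 I_d), evaluated by Monte Carlo.

    samples: (n, d) array of i.i.d. draws from the unknown distribution P.
    """
    rng = np.random.default_rng(seed)
    n, d = samples.shape
    # Draw Monte Carlo points from the mixture itself: pick a center
    # uniformly at random, then add isotropic Gaussian noise.
    idx = rng.integers(n, size=num_mc)
    x = samples[idx] + sigma * rng.standard_normal((num_mc, d))
    # Log-density of the mixture at each Monte Carlo point:
    #   log q(x) = logsumexp_i( -||x - S_i||^2 / (2 sigma^2) )
    #              - log n - (d/2) log(2 pi sigma^2)
    sq_dists = ((x[:, None, :] - samples[None, :, :]) ** 2).sum(axis=-1)
    log_q = (logsumexp(-sq_dists / (2 * sigma**2), axis=1)
             - np.log(n) - 0.5 * d * np.log(2 * np.pi * sigma**2))
    # Entropy is E[-log q(X)] with X drawn from the mixture.
    return -log_q.mean()

# Sanity check: for S ~ N(0, I_d), h(S + Z) = (d/2) log(2*pi*e*(1 + sigma^2)).
rng = np.random.default_rng(0)
d, sigma = 2, 1.0
s = rng.standard_normal((1000, d))
print(plugin_entropy(s, sigma, seed=1))                      # plug-in estimate
print(0.5 * d * np.log(2 * np.pi * np.e * (1 + sigma**2)))   # ground truth
```

Up to Monte Carlo error this computes exactly $h(\hat{P}_n * \mathcal{N}_\sigma)$; the pairwise-distance matrix is $(\texttt{num\_mc} \times n)$, so for large $n$ one would evaluate it in chunks.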