The fundamental task of group testing is to recover a small distinguished subset of items from a large population while efficiently reducing the total number of tests (measurements). The key contribution of this paper is in adopting a new information-theoretic perspective on group testing problems. We formulate the group testing problem as a channel coding/decoding problem and derive a single-letter characterization for the total number of tests used to identify the defective set. Although the focus of this paper is primarily on group testing, our main result is generally applicable to other compressive sensing models. The single-letter characterization is shown to be order-wise tight for many interesting noisy group testing scenarios. Specifically, we consider an additive Bernoulli(q) noise model where we show that, for N items and K defectives, the number of tests T is O(K log N / (1−q)) for arbitrarily small average error probability and O(K² log N / (1−q)) for a worst-case error criterion. We also consider dilution effects whereby a defective item in a positive pool might get diluted with probability u and potentially missed. In this case, it is shown that T is O(K log N / (1−u)²) and O(K² log N / (1−u)²) for the average and the worst-case error criteria, respectively. Furthermore, our bounds allow us to verify existing known bounds for noiseless group testing, including the deterministic noise-free case and approximate reconstruction with bounded distortion. Our proof of achievability is based on random coding and the analysis of a maximum-likelihood detector, and our information-theoretic lower bound is based on Fano's inequality.
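As a concrete illustration of the additive-noise model and the O(K log N / (1−q)) scaling, the following sketch simulates random Bernoulli pooling with Bernoulli(q) test noise. The pooling density, the constant in T, and the decoder are illustrative choices; the simple elimination rule below stands in for the paper's maximum-likelihood detector.

```python
# Minimal simulation of the additive Bernoulli(q) noise model sketched above.
# The pooling density 1/K and the constant in T are illustrative choices, and
# the simple elimination decoder stands in for the paper's ML detector.
import numpy as np

rng = np.random.default_rng(1)
N, K, q = 500, 5, 0.1
T = int(6 * K * np.log(N) / (1 - q))       # tests: O(K log N / (1 - q))

defective = rng.choice(N, size=K, replace=False)
pools = rng.random((T, N)) < 1.0 / K       # random Bernoulli pool design
hits = pools[:, defective].any(axis=1)     # OR of defectives in each pool
y = hits | (rng.random(T) < q)             # additive noise: 0 -> 1 flips only

# Additive noise never turns a positive pool negative, so every item seen in
# a negative pool is provably non-defective; keep the items never seen there.
neg_counts = pools[~y].sum(axis=0)
estimate = np.flatnonzero(neg_counts == 0)
print(sorted(defective), list(estimate))
```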
The problem of multiple hypothesis testing with observation control is considered in both fixed sample size and sequential settings. In the fixed sample size setting, for binary hypothesis testing, the optimal exponent for the maximal error probability corresponds to the maximum Chernoff information over the choice of controls, and a pure stationary open-loop control policy is asymptotically optimal within the larger class of all causal control policies. For multihypothesis testing in the fixed sample size setting, lower and upper bounds on the optimal error exponent are derived. It is also shown, through an example with three hypotheses, that the optimal causal control policy can be strictly better than the optimal open-loop control policy. In the sequential setting, a test based on earlier work by Chernoff for binary hypothesis testing is shown to be first-order asymptotically optimal for multihypothesis testing in a strong sense, using the notion of decision making risk in place of the overall probability of error. Another test is also designed to meet hard risk constraints while retaining asymptotic optimality. The role of past information and randomization in designing optimal control policies is discussed.
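The binary fixed-sample-size result above reduces policy design to maximizing Chernoff information over the controls. The sketch below computes the Chernoff information between two discrete distributions and picks the best control from a finite set; the distributions and control labels are made up for illustration.

```python
# Sketch: Chernoff information of two discrete distributions, maximized over
# a finite control set. Distributions and control labels are illustrative.
import numpy as np

def chernoff_information(p0, p1):
    # C(p0, p1) = max over 0 < s < 1 of -log sum_x p0(x)^s * p1(x)^(1-s)
    s = np.linspace(0.001, 0.999, 999)[:, None]
    return float(np.max(-np.log(np.sum(p0**s * p1**(1 - s), axis=1))))

# Observation distributions under H0 and H1 for each candidate control.
controls = {
    "u1": (np.array([0.5, 0.5]), np.array([0.8, 0.2])),
    "u2": (np.array([0.9, 0.1]), np.array([0.2, 0.8])),
}
best = max(controls, key=lambda u: chernoff_information(*controls[u]))
print(best, chernoff_information(*controls[best]))
# The asymptotically optimal stationary open-loop policy applies `best`
# at every time step.
```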
This paper presents a remarkably simple, yet powerful, algorithm termed Coherence Pursuit (CoP) for robust Principal Component Analysis (PCA). As inliers lie in a low-dimensional subspace and are mostly correlated, an inlier is likely to have strong mutual coherence with a large number of data points. By contrast, outliers either do not admit low-dimensional structures or form small clusters. In either case, an outlier is unlikely to bear strong resemblance to a large number of data points. Given that, CoP sets an outlier apart from an inlier by comparing their coherence with the rest of the data points. The mutual coherences are computed by forming the Gram matrix of the normalized data points. Subsequently, the sought subspace is recovered from the span of the subset of the data points that exhibit strong coherence with the rest of the data. As CoP only involves one simple matrix multiplication, it is significantly faster than state-of-the-art robust PCA algorithms. We derive analytical performance guarantees for CoP under different models for the distributions of inliers and outliers, in both noise-free and noisy settings. CoP is the first robust PCA algorithm that is simultaneously non-iterative, provably robust to both unstructured and structured outliers, and able to tolerate a large number of unstructured outliers.
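A minimal sketch of the coherence computation described above, on synthetic data; the dimensions, sample counts, and selection rule are illustrative choices rather than the paper's settings.

```python
# Minimal sketch of the coherence computation described above, on synthetic
# data; dimensions, sample counts, and the selection rule are illustrative.
import numpy as np

rng = np.random.default_rng(0)
m, r = 50, 3                                        # ambient / subspace dims
U = np.linalg.qr(rng.standard_normal((m, r)))[0]    # random r-dim subspace
inliers = U @ rng.standard_normal((r, 200))         # 200 inliers in span(U)
outliers = rng.standard_normal((m, 40))             # 40 unstructured outliers
D = np.concatenate([inliers, outliers], axis=1)

X = D / np.linalg.norm(D, axis=0)    # normalize the data points (columns)
G = X.T @ X                          # Gram matrix of mutual coherences
np.fill_diagonal(G, 0)               # discard each point's self-coherence
scores = np.linalg.norm(G, axis=1)   # coherence of each point with the rest

# Points with the largest scores are retained as inliers, and the subspace
# is recovered from their span (here via a truncated SVD).
keep = np.argsort(scores)[::-1][:200]
U_hat = np.linalg.svd(D[:, keep], full_matrices=False)[0][:, :r]
```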
The group velocity of 'space-time' wave packets - propagation-invariant pulsed beams endowed with tight spatio-temporal spectral correlations - can take on arbitrary values in free space. Here we investigate theoretically and experimentally the maximum group delay that realistic finite-energy space-time wave packets can achieve with respect to a reference pulse traveling at the speed of light. We find that this delay is determined solely by the spectral uncertainty in the association between the spatial frequencies and wavelengths underlying the wave packet's spatio-temporal spectrum - and not by the beam size, bandwidth, or pulse width. We show experimentally that the propagation of space-time wave packets is delimited by a spectral-uncertainty-induced 'pilot envelope' that travels at a group velocity equal to the speed of light in vacuum. Temporal walk-off between the space-time wave packet and the pilot envelope limits the maximum achievable differential group delay to the width of the pilot envelope. Within this pilot envelope the space-time wave packet can locally travel at an arbitrary group velocity and yet not violate relativistic causality, because the leading and trailing edges of superluminal and subluminal space-time wave packets, respectively, are suppressed once they reach the envelope edge. Using pulses of width ∼ 4 ps and a spectral uncertainty of ∼ 20 pm, we measure maximum differential group delays of approximately ±150 ps, which exceed previously reported measurements by at least three orders of magnitude.
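As a rough consistency check of the claim that spectral uncertainty alone sets the delay limit: an uncertainty δλ corresponds to a temporal envelope of duration on the order of λ₀²/(c·δλ). Assuming a carrier wavelength of ∼ 800 nm (not stated above), the reported 20 pm uncertainty gives a pilot envelope of order 100 ps, the same order as the measured ±150 ps.

```python
# Back-of-the-envelope check: a spectral uncertainty dlam corresponds to a
# pilot envelope of duration ~ lam0**2 / (c * dlam). The carrier wavelength
# lam0 is an assumption; the abstract does not state it.
c = 3.0e8          # speed of light in vacuum, m/s
lam0 = 800e-9      # assumed carrier wavelength, m
dlam = 20e-12      # spectral uncertainty, 20 pm
dt = lam0**2 / (c * dlam)
print(f"pilot envelope ~ {dt * 1e12:.0f} ps")   # ~100 ps, same order as 150 ps
```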
In this paper we study the problem of tracking an object moving randomly through a network of wireless sensors. Our objective is to devise strategies for scheduling the sensors to optimize the tradeoff between tracking performance and energy consumption. We cast the scheduling problem as a Partially Observable Markov Decision Process (POMDP), where the control actions correspond to the set of sensors to activate at each time step. Using a bottom-up approach, we consider different sensing, motion and cost models with increasing levels of difficulty. At the first level, the sensing regions of the different sensors do not overlap and the target is only observed within the sensing range of an active sensor. Then, we consider sensors with overlapping sensing range such that the tracking error, and hence the actions of the different sensors, are tightly coupled. Finally, we consider scenarios wherein the target locations and sensors' observations assume values on continuous spaces. Exact solutions are generally intractable even for the simplest models due to the dimensionality of the information and action spaces. Hence, we devise approximate solution techniques, and in some cases derive lower bounds on the optimal tradeoff curves. The generated scheduling policies, albeit suboptimal, often provide close-to-optimal energy-tracking tradeoffs.
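For the simplest model above (non-overlapping sensing regions, target observed only within the active sensor's range), the POMDP belief update takes a particularly simple form. The sketch below is illustrative only: the grid size, random-walk transition model, and region layout are assumptions, not the paper's setup.

```python
# Belief-update sketch for the simplest model: non-overlapping sensing
# regions and a target observed only inside the active sensor's region.
# Grid size, transition model, and region layout are illustrative.
import numpy as np

n_cells = 10
P = np.zeros((n_cells, n_cells))            # random-walk transition kernel
for i in range(n_cells):
    for j in (i - 1, i, i + 1):
        if 0 <= j < n_cells:
            P[i, j] = 1.0
P /= P.sum(axis=1, keepdims=True)

regions = {0: {0, 1, 2}, 1: {3, 4, 5, 6}, 2: {7, 8, 9}}   # sensor -> cells

def belief_update(belief, sensor, detected):
    predicted = belief @ P                  # prediction through the kernel
    in_region = np.isin(np.arange(n_cells), list(regions[sensor]))
    likelihood = in_region if detected else ~in_region
    posterior = predicted * likelihood      # Bayes measurement update
    return posterior / posterior.sum()

b = np.full(n_cells, 1.0 / n_cells)         # uniform prior over cells
b = belief_update(b, sensor=1, detected=False)
# A scheduler would choose the next sensor subset by trading a tracking-error
# proxy (e.g., belief entropy) against the energy cost of activation.
```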