We consider the problem of detecting a small subset of defective items from a large set via non-adaptive "random pooling" group tests. We consider both the case when the measurements are noiseless and the case when the measurements are noisy (the outcome of each group test may be independently faulty with probability q). Order-optimal results for these scenarios are known in the literature. We give information-theoretic lower bounds on the query complexity of these problems and provide corresponding computationally efficient algorithms that match the lower bounds up to a constant factor. To the best of our knowledge, this work is the first to explicitly estimate such a constant characterizing the gap between the upper and lower bounds for these problems.
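To make the measurement model above concrete, here is a minimal sketch (in Python, with illustrative parameter names of our choosing) of non-adaptive random pooling: each pool includes each item independently at random, the noiseless outcome of a pool is the disjunction (OR) of its defective members, and in the noisy case each outcome is independently flipped with probability q.

```python
import numpy as np

def random_pooling_tests(n, T, defective, p_include=0.5, q=0.0, rng=None):
    """Simulate T non-adaptive random-pooling group tests on n items.

    defective : indices of defective items
    p_include : probability each item joins each pool (a design choice for
                this sketch, not a value taken from the papers)
    q         : probability each test outcome is independently flipped
    Returns the T x n pooling matrix A and the length-T outcome vector y.
    """
    rng = np.random.default_rng() if rng is None else rng
    # A[t, i] = 1 if item i is included in pool t.
    A = (rng.random((T, n)) < p_include).astype(int)
    x = np.zeros(n, dtype=int)
    x[list(defective)] = 1
    # Noiseless outcome: OR (disjunction) over the defective items in each pool.
    y = (A @ x) > 0
    # Noisy outcome: each test result flips independently with probability q.
    flips = rng.random(T) < q
    return A, np.where(flips, ~y, y).astype(int)

# Example: n = 1000 items, 5 defectives, 200 tests, 5% flip probability.
A, y = random_pooling_tests(1000, 200, defective={3, 14, 159, 265, 358}, q=0.05)
```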
We present computationally efficient and provably correct algorithms with near-optimal sample-complexity for noisy non-adaptive group testing. Group testing involves grouping arbitrary subsets of items into pools. Each pool is then tested to identify the defective items, which are usually assumed to be sparsely distributed. We consider random non-adaptive pooling, where pools are selected randomly and independently of the test outcomes. Our noisy scenario accounts for both false negatives and false positives in the test outcomes. Inspired by compressive sensing algorithms, we introduce four novel computationally efficient decoding algorithms for group testing: CBP via Linear Programming (CBP-LP), NCBP-LP (Noisy CBP-LP), and the two related algorithms NCBP-SLP+ and NCBP-SLP- ("Simple" NCBP-LP). The first of these algorithms deals with the noiseless measurement scenario, and the next three with the noisy measurement scenario. We derive explicit sample-complexity bounds, with all constants made explicit, for these algorithms as a function of the desired error probability; the noise parameters; the number of items; and the size of the defective set (or an upper bound on it). We show that the sample-complexities of our algorithms are near-optimal with respect to known information-theoretic bounds.
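The precise CBP-LP and NCBP-LP formulations are given in the paper; purely as an illustration of the general idea of LP-relaxation decoding (relax each item's 0/1 defectiveness indicator to [0, 1] and penalize disagreement with the observed pool outcomes through slack variables), the following sketch uses scipy.optimize.linprog. The objective, constraints, penalty weight, and threshold below are assumptions made for this example, not the authors' formulation.

```python
import numpy as np
from scipy.optimize import linprog

def lp_decode(A, y, lam=2.0, threshold=0.5):
    """Illustrative LP-relaxation decoder for noisy group testing.

    A : (T, n) 0/1 pooling matrix; y : length-T 0/1 outcome vector.
    Relax each item's defectiveness indicator x_i to [0, 1], add one slack
    variable per test to absorb noise, and minimize sum(x) + lam * sum(slack).
    (Objective and constraints are illustrative, not the CBP-LP / NCBP-LP
    formulations from the paper.)
    """
    T, n = A.shape
    c = np.concatenate([np.ones(n), lam * np.ones(T)])
    rows, rhs = [], []
    for t in range(T):
        slack = np.zeros(T)
        slack[t] = 1.0
        if y[t] == 0:
            # Negative pool: its items should sum to (almost) zero:
            # sum_{i in pool t} x_i - xi_t <= 0
            rows.append(np.concatenate([A[t], -slack]))
            rhs.append(0.0)
        else:
            # Positive pool: at least one (fractional) defective, up to slack:
            # sum_{i in pool t} x_i + xi_t >= 1
            rows.append(np.concatenate([-A[t], -slack]))
            rhs.append(-1.0)
    bounds = [(0, 1)] * n + [(0, None)] * T
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=bounds, method="highs")
    x_hat = res.x[:n]
    return np.flatnonzero(x_hat > threshold)

# Example (with A, y from the pooling sketch above):
# estimated_defectives = lp_decode(A, y)
```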
We consider a new group testing model wherein each item is a binary random variable defined by an a priori probability of being defective. We assume that each probability is small and that items are independent, but not necessarily identically distributed. The goal of group testing algorithms is to identify with high probability the subset of defectives via non-linear (disjunctive) binary measurements. Our main contributions are two classes of algorithms: (1) adaptive algorithms with tests based either on a maximum entropy principle or on a Shannon-Fano/Huffman code; (2) non-adaptive algorithms. Under loose assumptions and with high probability, our algorithms only need a number of measurements that is close to the information-theoretic lower bound, up to an explicitly calculated universal constant factor.
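For reference, the information-theoretic lower bound alluded to here can be read off from a Fano-type counting argument: each binary test outcome reveals at most one bit, so identifying the defectivity vector X of independent items with defect probabilities p_1, ..., p_n with vanishing error probability requires roughly

$$ T \;\gtrsim\; H(X) \;=\; \sum_{i=1}^{n} h(p_i), \qquad h(p) = -p\log_2 p - (1-p)\log_2(1-p), $$

tests; when every p_i is about d/n this is on the order of d \log_2(n/d). Pinning down the universal constant factor relative to this quantity is exactly what the abstract above claims to make explicit.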
Efficient deterministic algorithms are proposed with logarithmic step complexities for the generation of entangled GHZ_N and W_N states useful for quantum networks, and an implementation on the IBM quantum computer up to N = 16 is demonstrated. Improved quality is then investigated using full quantum tomography for low-N GHZ and W states. This is complemented by parity oscillations and histogram distance for large-N GHZ and W states, respectively. Robust states are built with about twice the number of quantum bits that were previously achieved.
Abstract: We consider computationally efficient and provably correct algorithms with near-optimal sample-complexity for the problem of noisy non-adaptive group testing. Group testing involves grouping arbitrary subsets of items into pools. Each pool is then tested to identify the defective items, which are usually assumed to be "sparse". We consider non-adaptive random pooling measurements, where pools are selected randomly and independently of the test outcomes. We also consider a noisy measurement model that allows for both false negative and false positive test outcomes (as well as asymmetric noise and activation noise). We consider three classes of algorithms for the group testing problem, which we call the "Coupon Collector Algorithm", the "Column Matching Algorithms", and the "LP Decoding Algorithms"; the last two classes (versions of some of which had been considered before in the literature) were inspired by corresponding algorithms in the compressive sensing literature. The second and third classes come in several flavours, dealing separately with the noiseless and noisy measurement scenarios. Our contribution is a novel analysis that derives explicit sample-complexity bounds, with all constants expressly computed, for these algorithms as a function of the desired error probability; the noise parameters; the number of items; and the size of the defective set (or an upper bound on it). We also compare these bounds to information-theoretic lower bounds on sample complexity based on Fano's inequality, and show that the upper and lower bounds match up to an explicitly computable universal constant factor (independent of the problem parameters).
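For the noiseless case, the elimination idea behind coupon-collector-style decoding can be stated in a few lines: any item that appears in at least one negative pool is definitely non-defective, so the decoder declares exactly the never-eliminated items defective. The sketch below (Python) illustrates this, together with a naive count-based rule in the spirit of column matching for the noisy case; the threshold choice, and whether these match the paper's exact algorithms and guarantees, is deliberately left open here.

```python
import numpy as np

def decode_noiseless(A, y):
    """Noiseless elimination decoding: any item appearing in a negative pool
    is definitely non-defective; declare the remaining items defective.
    A : (T, n) 0/1 pooling matrix; y : length-T 0/1 outcome vector."""
    in_negative_pool = A[y == 0].sum(axis=0) > 0
    return np.flatnonzero(~in_negative_pool)

def decode_noisy_by_counts(A, y, tau):
    """Naive noisy variant in the spirit of column matching: rank items by
    how many negative pools they appear in, and keep those at or below a
    threshold tau (choosing tau is where the paper's analysis comes in)."""
    negative_hits = A[y == 0].sum(axis=0)
    return np.flatnonzero(negative_hits <= tau)
```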