Abstract-Let A be an M × N matrix (M < N) which is an instance of a real random Gaussian ensemble. In compressed sensing we are interested in finding the sparsest solution to the system of equations Ax = y for a given y. In general, whenever the sparsity of x is smaller than half the dimension of y, then with overwhelming probability over A the sparsest solution is unique and can be found by an exhaustive search over x, with exponential time complexity, for any y. The recent work of Candès, Donoho, and Tao shows that minimization of the ℓ1 norm of x subject to Ax = y results in the sparsest solution provided the sparsity of x, say K, is smaller than a certain threshold for a given number of measurements. Specifically, if the dimension of y approaches the dimension of x, the sparsity of x should be K < 0.239N. Here, we consider the case where x is block sparse, i.e., x consists of n = N/d blocks, where each block is of length d and is either a zero vector or a nonzero vector (by a nonzero vector we mean a vector that can have both zero and nonzero components). Instead of the ℓ1-norm relaxation, we consider the following relaxation:

min_x ‖X_1‖_2 + ‖X_2‖_2 + … + ‖X_n‖_2, subject to Ax = y, (*)

where X_i = (x_{(i−1)d+1}, x_{(i−1)d+2}, …, x_{id}) for i = 1, 2, …, n. Our main result is that as N → ∞, (*) finds the sparsest solution to Ax = y, with overwhelming probability in A, for any x whose sparsity is K = (1 − ε)(M/2), provided M/N < 1 − 1/d and d = Ω(log(1/ε)/ε³). The relaxation given in (*) can be solved in polynomial time using semidefinite programming.

Index Terms-Compressed sensing, block-sparse signals, semidefinite programming.
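A minimal numerical sketch of the relaxation (*), assuming the generic convex-optimization package CVXPY rather than the semidefinite-programming formulation mentioned above; all sizes (N, M, d, and the number of nonzero blocks) are illustrative choices, not the asymptotic regime of the theorem.

```python
# Sketch of relaxation (*): minimize the sum of per-block l2 norms
# subject to Ax = y, on a small random Gaussian instance.
import numpy as np
import cvxpy as cp

N, d = 120, 4          # signal length and block length (illustrative)
n = N // d             # number of blocks
M = 60                 # number of measurements
k = 5                  # number of nonzero blocks

rng = np.random.default_rng(0)
A = rng.standard_normal((M, N))   # random Gaussian measurement matrix

# Block-sparse ground truth: k randomly chosen blocks are nonzero.
x0 = np.zeros(N)
for i in rng.choice(n, size=k, replace=False):
    x0[i * d:(i + 1) * d] = rng.standard_normal(d)
y = A @ x0

# Relaxation (*): sum of l2 norms of the blocks X_i.
x = cp.Variable(N)
objective = cp.Minimize(sum(cp.norm(x[i * d:(i + 1) * d], 2) for i in range(n)))
prob = cp.Problem(objective, [A @ x == y])
prob.solve()

print("relative recovery error:",
      np.linalg.norm(x.value - x0) / np.linalg.norm(x0))
```

In small instances like this one the mixed-norm objective typically recovers the block-sparse signal exactly; the guarantee stated above concerns the limit N → ∞.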
We introduce a new family of error-correcting codes that have a polynomial-time encoder and a polynomial-time list-decoder, correcting a fraction of adversarial errors up to 1 − (M^M R^M)^{1/(M+1)}, where R is the rate of the code and M ≥ 1 is an arbitrary integer parameter. This makes it possible to decode beyond the Guruswami-Sudan radius of 1 − √R for all rates less than 1/16. Stated another way, for any ε > 0, we can list-decode in polynomial time a fraction of errors up to 1 − ε with a code of length n and rate Ω(ε/log(1/ε)), defined over an alphabet of size n^M = n^{O(log(1/ε))}. Notably, this error-correction is achieved in the worst case against adversarial errors: a probabilistic model for the error distribution is neither needed nor assumed. The best results so far for polynomial-time list-decoding of adversarial errors required a rate of O(ε²) to achieve the correction radius of 1 − ε.

Our codes and list-decoders are based on two key ideas. The first is the transition from bivariate polynomial interpolation, pioneered by Sudan and Guruswami-Sudan [12, 22], to multivariate interpolation decoding. The second idea is to part ways with Reed-Solomon codes, for which numerous prior attempts [2, 3, 12, 18] at breaking the O(ε²) rate barrier in the worst case were unsuccessful. Rather than devising a better list-decoder for Reed-Solomon codes, we devise better codes. Standard Reed-Solomon encoders view a message as a polynomial f(X) over a field F_q, and produce the corresponding codeword by evaluating f(X) at n distinct elements of F_q. Herein, given f(X), we first compute one or more related polynomials g_1(X), g_2(X), …, g_{M−1}(X) and produce the corresponding codeword by evaluating all these polynomials. Correlation between f(X) and g_i(X), carefully designed into our encoder, then provides the additional information we need to recover the encoded message from the output of the multivariate interpolation process.
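A toy sketch of the encoding idea for M = 2, assuming a small prime field and one natural choice of correlation, g(X) = f(X)^D mod E(X) for an irreducible E(X); the concrete parameters (P, D, E, and the message f) are illustrative and not the ones required by the analysis.

```python
# Toy encoder sketch: evaluate both f(X) and a correlated g(X) at every
# field element, so each codeword position carries a pair of symbols.
P = 13                 # small prime field F_13 (illustrative)
E = [1, 0, 0, 2]       # E(X) = X^3 + 2, irreducible over F_13 (-2 is not a cube mod 13)
D = 2                  # exponent correlating g with f (illustrative)

def poly_rem(a, b, p):
    """Remainder of a(X) mod b(X) over F_p; coefficients in descending order."""
    a = a[:]
    while len(a) >= len(b) and any(a):
        if a[0] == 0:
            a.pop(0)
            continue
        factor = a[0] * pow(b[0], -1, p) % p
        for i in range(len(b)):
            a[i] = (a[i] - factor * b[i]) % p
        a.pop(0)
    return a or [0]

def poly_mul(a, b, p):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_pow_mod(f, e, mod, p):
    result = [1]
    for _ in range(e):
        result = poly_rem(poly_mul(result, f, p), mod, p)
    return result

def poly_eval(f, x, p):
    acc = 0
    for c in f:          # Horner's rule, descending coefficients
        acc = (acc * x + c) % p
    return acc

f = [3, 1, 4]                    # message polynomial f(X) = 3X^2 + X + 4
g = poly_pow_mod(f, D, E, P)     # correlated polynomial g(X) = f(X)^D mod E(X)
codeword = [(poly_eval(f, a, P), poly_eval(g, a, P)) for a in range(P)]
print(codeword[:5])
```

Each codeword position thus carries the pair (f(α), g(α)), which is the extra algebraic structure the multivariate interpolation decoder exploits.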
Abstract-Microarrays (DNA, protein, etc.) are massively parallel affinity-based biosensors capable of detecting and quantifying a large number of different genomic particles simultaneously. Among them, DNA microarrays comprising tens of thousands of probe spots are currently being employed to test a multitude of targets in a single experiment. In conventional microarrays, each spot contains a large number of copies of a single probe designed to capture a single target and, hence, collects only a single data point. This is a wasteful use of the sensing resources in comparative DNA microarray experiments, where a test sample is measured relative to a reference sample. Typically, only a fraction of the total number of genes represented by the two samples is differentially expressed, and thus a vast number of probe spots may not provide any useful information. To this end, we propose an alternative design, the so-called compressed microarrays, wherein each spot contains copies of several different probes and the total number of spots is potentially much smaller than the number of targets being tested. Fewer spots directly translate to significantly lower costs, due to cheaper array manufacturing, simpler image acquisition and processing, and a smaller amount of genomic material needed for experiments. To recover signals from compressed microarray measurements, we leverage ideas from compressive sampling. For sparse measurement matrices, we propose an algorithm that has significantly lower computational complexity than the widely used linear-programming-based methods, and that can also recover signals that are less sparse.
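For intuition, here is a small simulation sketch, assuming a sparse 0/1 mixing matrix (each spot pools a handful of probes) and using orthogonal matching pursuit as a generic sparse-recovery stand-in; it is not the lower-complexity algorithm proposed in the paper, whose details are not given in the abstract.

```python
# Compressed-microarray toy simulation: fewer spots than targets, each spot
# pooling several probes, with a generic greedy recovery (OMP) baseline.
import numpy as np

rng = np.random.default_rng(1)
n_targets, n_spots, k = 200, 60, 6      # illustrative sizes

# Sparse 0/1 measurement matrix: each spot mixes 8 distinct probes.
Phi = np.zeros((n_spots, n_targets))
for row in Phi:
    row[rng.choice(n_targets, size=8, replace=False)] = 1.0

# Sparse "differential expression" vector: only k targets change.
x0 = np.zeros(n_targets)
x0[rng.choice(n_targets, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ x0

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick k columns, least-squares refit."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

x_hat = omp(Phi, y, k)
print("support recovered:",
      set(np.flatnonzero(x0)) == set(np.flatnonzero(x_hat)))
```

The point of the toy setup is the counting: 60 spots can identify which of the 200 targets are differentially expressed when only a few of them are.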
We consider the deterministic construction of a measurement matrix and a recovery method for signals that are block sparse. A signal that has dimension N = nd, consisting of n blocks of size d, is called (s, d)-block sparse if only s blocks out of n are nonzero. We construct an explicit linear mapping Φ that maps the (s, d)-block sparse signal to a measurement vector of dimension M, where … − o(1). We show that if the (s, d)-block sparse signal is chosen uniformly at random, then the signal can almost surely be reconstructed from the measurement vector in O(N³) computations.
A soft-decision decoding algorithm for Reed-Solomon codes was recently proposed in [2]. This algorithm converts probabilities observed at the channel output into algebraic interpolation conditions, specified in terms of a multiplicity matrix M. Koetter and Vardy [2] show that the probability of decoding failure is given by Pr{S_M ≤ Δ(M)}, where S_M is a random variable and Δ(M) is a known function of M. They then compute the multiplicity matrix M that maximizes the expected value of S_M. Here, we attempt to directly minimize the overall probability of decoding failure Pr{S_M ≤ Δ(M)}. First, we recast this optimization problem into a geometrical framework. Using this framework, we derive a simple modification to the KV algorithm which results in a provably better multiplicity assignment. Alternatively, we approximate the distribution of S_M by a Gaussian distribution, and develop an iterative algorithm to minimize Pr{S_M ≤ Δ(M)} under this approximation. This leads to coding gains of about 0.20 dB for RS codes of length 255 and up to 0.75 dB for RS codes of length 15, as compared to the Koetter-Vardy algorithm.
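A sketch of the Gaussian-approximation step, assuming a toy reliability matrix Π, the proportional Koetter-Vardy-style assignment M = ⌊λΠ⌋, and the standard sufficient-condition threshold Δ(M) ≈ sqrt(2(k−1)C(M)) as a stand-in for the exact Δ(M); since S_M is a sum of independent per-symbol contributions, its mean and variance follow directly from Π.

```python
# Gaussian estimate of Pr{S_M <= Delta(M)} for a toy reliability matrix.
# The threshold form sqrt(2(k-1)C(M)) is an illustrative approximation.
import math
import numpy as np

rng = np.random.default_rng(2)
q, n, k = 16, 15, 7                  # alphabet size, code length, dimension

# Toy reliability matrix: Pi[i, j] = prob. that symbol j equals alpha_i.
Pi = rng.random((q, n)) ** 4
Pi /= Pi.sum(axis=0)

lam = 8.0
M = np.floor(lam * Pi).astype(int)   # KV-style assignment: M proportional to Pi

# Score S_M = sum_j m_{X_j, j}: per-column means and variances from Pi.
mean = float((M * Pi).sum())
var = float((M**2 * Pi).sum() - ((M * Pi).sum(axis=0) ** 2).sum())

cost = int((M * (M + 1) // 2).sum())     # interpolation cost C(M)
delta = math.sqrt(2 * (k - 1) * cost)    # approximate threshold Delta(M)

z = (delta - mean) / math.sqrt(var)
p_fail = 0.5 * math.erfc(-z / math.sqrt(2))   # standard normal CDF at z
print(f"mean={mean:.1f}  sd={math.sqrt(var):.1f}  "
      f"Delta={delta:.1f}  Pr[fail]~{p_fail:.3f}")
```

An iterative minimizer would perturb individual entries of M and re-evaluate this estimate; the sketch shows only the evaluation.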