Suppose $x$ is any exactly $k$-sparse vector in $\mathbb{R}^n$. We present a class of "sparse" matrices $A$, and a corresponding algorithm that we call SHO-FA (for Short and Fast) that, with high probability over $A$, can reconstruct $x$ from $Ax$. The SHO-FA algorithm is related to the Invertible Bloom Lookup Tables (IBLTs) recently introduced by Goodrich et al., with two important distinctions: SHO-FA relies on linear measurements, and is robust to noise and approximate sparsity. The SHO-FA algorithm is the first to simultaneously have the following properties: (a) it requires only $O(k)$ measurements; (b) the bit-precision of each measurement and each arithmetic operation is $O(\log(n) + P)$ (here $2^{-P}$ corresponds to the desired relative error in the reconstruction of $x$); (c) the computational complexity of decoding is $O(k)$ arithmetic operations; and (d) if the reconstruction goal is simply to recover a single component of $x$ instead of all of $x$, with high probability over $A$ this can be done in constant time. All constants above are independent of all problem parameters other than the desired probability of success. For a wide range of parameters these properties are information-theoretically order-optimal. In addition, our SHO-FA algorithm is robust to random noise and (random) approximate sparsity for a large range of $k$. In particular, suppose the measured vector equals $A(x + z) + e$, where $z$ and $e$ correspond respectively to the source tail and the measurement noise. Under reasonable statistical assumptions on $z$ and $e$, our decoding algorithm reconstructs $x$ with an estimation error of $O(\|z\|_1 + (\log k)^2 \|e\|_1)$. The SHO-FA algorithm works with high probability over $A$, $z$, and $e$, and still requires only $O(k)$ steps and $O(k)$ measurements over $O(\log(n))$-bit numbers. This is in contrast to most existing algorithms, which focus on the "worst-case" $z$ model, where it is known that $\Omega(k \log(n/k))$ measurements over $O(\log(n))$-bit numbers are necessary.
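To make the IBLT-style decoding idea concrete, the following is a minimal, hypothetical sketch in Python of peeling-based recovery from linear measurements, in the noiseless exactly $k$-sparse setting. It assumes a simplified measurement design: each coordinate $i$ is hashed to $d$ buckets, and each bucket $j$ keeps the two linear sums $s_1[j] = \sum x_i$ and $s_2[j] = \sum i \cdot x_i$ over its coordinates, so a bucket containing a single unresolved nonzero reveals that coordinate's index via the ratio $s_2[j]/s_1[j]$. The names (`build_measurements`, `peel_decode`) and the parameter choices are illustrative assumptions, not the paper's construction; the actual SHO-FA design differs in its measurement structure and in how indices are identified and verified, and, unlike this toy version, tolerates noise.

```python
import random

def build_measurements(x, m, d=3, seed=0):
    """Hash each coordinate i into d of m buckets. Each bucket j keeps two
    linear measurements of x: s1[j] = sum of x_i, s2[j] = sum of i * x_i.
    (Illustrative stand-in for the paper's sparse measurement matrix A.)"""
    rng = random.Random(seed)
    buckets = [rng.sample(range(m), d) for _ in range(len(x))]
    s1, s2 = [0.0] * m, [0.0] * m
    for i, xi in enumerate(x):
        if xi != 0:
            for j in buckets[i]:
                s1[j] += xi
                s2[j] += i * xi
    return s1, s2, buckets

def peel_decode(s1, s2, buckets, n, tol=1e-9):
    """Repeatedly find a 'pure' bucket (one unresolved nonzero), read off its
    index and value, and subtract its contribution everywhere it was hashed."""
    xhat = {}
    changed = True
    while changed:
        changed = False
        for j in range(len(s1)):
            if abs(s1[j]) < tol:
                continue                       # empty (or fully peeled) bucket
            ratio = s2[j] / s1[j]
            i = round(ratio)
            # A pure bucket yields an integer index in range that hashes here.
            # (With random real values, a multi-item bucket passing all three
            # checks has probability ~0; SHO-FA verifies more carefully.)
            if abs(ratio - i) > 1e-6 or not (0 <= i < n) or j not in buckets[i]:
                continue
            xi = s1[j]
            xhat[i] = xhat.get(i, 0.0) + xi
            for jj in buckets[i]:              # peel i out of all its buckets
                s1[jj] -= xi
                s2[jj] -= i * xi
            changed = True
    return xhat

# Toy usage: m = ck buckets for a small constant c suffices for peeling to
# finish with high probability, mirroring the O(k) measurement count.
n, k, m = 10_000, 20, 60
x = [0.0] * n
for i in random.Random(1).sample(range(n), k):
    x[i] = random.Random(i).gauss(0, 1)
s1, s2, buckets = build_measurements(x, m)
xhat = peel_decode(s1, s2, buckets, n)
print("recovered", len(xhat), "of", k, "nonzeros; max error:",
      max((abs(x[i] - v) for i, v in xhat.items()), default=0.0))
```

Each successful peel touches only $d = O(1)$ buckets, so decoding all $k$ nonzeros costs $O(k)$ arithmetic operations once the peeling process completes, which is the source of the decoding complexity claimed above.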