We present a general approach to rounding semidefinite programming relaxations obtained by the Sum-of-Squares method (Lasserre hierarchy). Our approach is based on using the connection between these relaxations and the Sum-of-Squares proof system to transform a \emph{combining algorithm} (an algorithm that maps a distribution over solutions into a possibly weaker single solution) into a \emph{rounding algorithm} that maps a solution of the relaxation to a solution of the original problem.

Using this approach, we obtain algorithms that yield improved results for natural variants of several well-known problems:

1. We give a quasipolynomial-time algorithm that approximates $\max_{\|x\|_2 = 1} P(x)$ within an additive factor of $\varepsilon \|P\|_{\mathrm{spectral}}$, where $\varepsilon > 0$ is a constant, $P$ is a degree $d = O(1)$, $n$-variate polynomial with nonnegative coefficients, and $\|P\|_{\mathrm{spectral}}$ is the spectral norm of a matrix corresponding to $P$'s coefficients. Beyond being of interest in its own right, obtaining such an approximation for general polynomials (with possibly negative coefficients) is a long-standing open question in quantum information theory, and our techniques have already led to improved results in this area (Brandão and Harrow, STOC '13).

2. We give a polynomial-time algorithm that, given a subspace $V \subseteq \mathbb{R}^n$ of dimension $d$ that (almost) contains the characteristic function of a set of size $n/k$, finds a vector $v \in V$ that satisfies $\mathbb{E}_i\, v_i^4 \geq \Omega\bigl(d^{-1/3} k\, (\mathbb{E}_i\, v_i^2)^2\bigr)$. This is a natural analytical relaxation of the problem of finding the sparsest element in a subspace, and it is also motivated by a connection to the Small-Set Expansion problem shown by Barak et al. (STOC 2012). In particular, our results yield an improvement of the previous best known algorithms for Small-Set Expansion in a certain range of parameters.

3. We use this notion of $L_4$ vs. $L_2$ sparsity to obtain a polynomial-time algorithm with substantially improved guarantees for recovering a planted sparse vector $v$ in a random $d$-dimensional subspace of $\mathbb{R}^n$. If $v$ has $\mu n$ nonzero coordinates, we can recover it with high probability whenever $\mu \leq O(\min(1, n/d^2))$. In particular, when $d \leq \sqrt{n}$, this recovers a planted vector with up to $\Omega(n)$ nonzero coordinates. When $d \leq n^{2/3}$, our algorithm improves upon existing methods based on comparing the $L_1$ and $L_\infty$ norms, which intrinsically require $\mu \leq O(1/\sqrt{d})$; this includes a recent result of Demanet and Hand (2013).
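As a point of orientation for item 1 (our remark, not part of the original abstract), consider the degree-$2$ case under the standard quadratic-form reading, where $P(x) = \langle x, Ax\rangle$ for a symmetric coefficient matrix $A$ and $\|P\|_{\mathrm{spectral}} = \|A\|$. Then
\[
  \max_{\|x\|_2 = 1} \langle x, Ax\rangle \;=\; \lambda_{\max}(A) \;\leq\; \|A\| \;=\; \|P\|_{\mathrm{spectral}},
\]
so an additive error of $\varepsilon \|P\|_{\mathrm{spectral}}$ is an $\varepsilon$ fraction of a natural upper bound on the optimum. Since the degree-$2$ problem is exactly an eigenvalue computation, the substance of item 1 lies in handling degrees $d \geq 3$.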
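To see why the $L_4$-vs.-$L_2$ ratio in items 2 and 3 serves as a proxy for sparsity, here is a standard calculation (added for orientation; it does not appear in the original abstract). If $v$ is supported on at most $\mu n = n/k$ coordinates, then by Cauchy--Schwarz,
\[
  \mathbb{E}_i\, v_i^2 \;=\; \frac{1}{n} \sum_{i \in \mathrm{supp}(v)} v_i^2
  \;\leq\; \frac{\sqrt{\mu n}}{n} \Bigl(\sum_i v_i^4\Bigr)^{1/2}
  \;=\; \sqrt{\mu}\,\bigl(\mathbb{E}_i\, v_i^4\bigr)^{1/2},
\]
so $\mathbb{E}_i\, v_i^4 \geq \frac{1}{\mu} (\mathbb{E}_i\, v_i^2)^2 = k\, (\mathbb{E}_i\, v_i^2)^2$, with equality for the flat indicator vector of a set of size $n/k$. The guarantee in item 2 thus matches the benchmark achieved by the (almost) planted characteristic function up to a factor of $d^{1/3}$.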