Developing computationally efficient codes that approach the Shannon-theoretic limits for communication and compression has long been one of the major goals of information and coding theory. There have been significant advances towards this goal in the last couple of decades, with the emergence of turbo codes, sparse-graph codes, and polar codes. These codes are designed primarily for discrete-alphabet channels and sources. For Gaussian channels and sources, where the alphabet is inherently continuous, Sparse Superposition Codes or Sparse Regression Codes (SPARCs) are a promising class of codes for achieving the Shannon limits.

This monograph provides a unified and comprehensive overview of sparse regression codes, covering theory, algorithms, and practical implementation aspects. The first part of the monograph focuses on SPARCs for AWGN channel coding, and the second part on SPARCs for lossy compression (with the squared error distortion criterion). In the third part, SPARCs are used to construct codes for Gaussian multi-terminal channel and source coding models such as broadcast channels, multiple-access channels, and source and channel coding with side information. The monograph concludes with a discussion of open problems and directions for future work.

Chapter 1

[Figure 1.1: schematic of the design matrix A, with n rows and L sections of M columns each, and of the sparse vector β with a single non-zero entry per section.]

Figure 1.1: A Gaussian sparse regression codebook of block length n: A is a design matrix with independent Gaussian entries, and β is a sparse vector with one non-zero in each of L sections. Codewords are of the form Aβ, i.e., linear combinations of the columns corresponding to the non-zeros in β. The message is indexed by the locations of the non-zeros, and the values c_1, …, c_L are fixed a priori.
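To make the construction in Figure 1.1 concrete, the following is a minimal sketch (in Python/NumPy) of how a single SPARC codeword could be generated. The toy values of L, M, and n, the all-ones coefficients, the example message, and the 1/n variance of the design-matrix entries are illustrative assumptions made here, not parameters specified by the monograph at this point.

```python
import numpy as np

# Assumed toy parameters (not from the monograph): L sections of M columns
# each, block length n.
L, M, n = 4, 8, 16
c = np.ones(L)  # non-zero values c_1, ..., c_L, fixed a priori (assumed all ones here)

# Design matrix A with i.i.d. Gaussian entries; the 1/n variance is one
# common normalization choice, assumed here for illustration.
rng = np.random.default_rng(0)
A = rng.normal(loc=0.0, scale=1.0 / np.sqrt(n), size=(n, L * M))

# A message selects one column index (0 to M-1) in each of the L sections.
message = [3, 0, 5, 2]  # hypothetical example message

# Build the sparse vector beta: one non-zero per section, placed at the
# chosen index and set to the fixed value c_l.
beta = np.zeros(L * M)
for l, idx in enumerate(message):
    beta[l * M + idx] = c[l]

# The codeword is A @ beta, i.e., the sum of one column from each section.
codeword = A @ beta
print(codeword.shape)  # (16,)
```

In this sketch the message is carried entirely by the locations of the non-zeros in β, matching the description in the caption of Figure 1.1.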