We study the learnability of sums of independent integer random variables given a bound on the size of the union of their supports. For $A \subset \mathbb{Z}_{+}$, a sum of independent random variables with collective support $A$ (called an $A$-sum in this paper) is a distribution $S = X_1 + \cdots + X_N$ where the $X_i$'s are mutually independent (but not necessarily identically distributed) integer random variables with $\bigcup_i \mathrm{supp}(X_i) \subseteq A$. We give two main algorithmic results for learning such distributions:

1. For the case $|A| = 3$, we give an algorithm for learning $A$-sums to accuracy $\epsilon$ that uses $\mathrm{poly}(1/\epsilon)$ samples and runs in time $\mathrm{poly}(1/\epsilon)$, independent of $N$ and of the elements of $A$.

2. For an arbitrary constant $k \geq 4$, if $A = \{a_1, \ldots, a_k\}$ with $0 \leq a_1 < \cdots < a_k$, we give an algorithm that uses $\mathrm{poly}(1/\epsilon) \cdot \log\log a_k$ samples (independent of $N$) and runs in time $\mathrm{poly}(1/\epsilon, \log a_k)$.$^{1}$

$^{1}$ Here and throughout we assume a unit-cost model for the arithmetic operations $+$, $\times$, $\div$.

We prove an essentially matching lower bound: if $|A| = 4$, then any algorithm must use $\Omega(\log\log a_4)$ samples even for learning to constant accuracy. We also give similar-in-spirit (but quantitatively very different) algorithmic results, and essentially matching lower bounds, for the case in which $A$ is not known to the learner. Our learning algorithms employ new limit theorems which may be of independent interest, and our lower bounds rely on equidistribution-type results from number theory. Together, our algorithms and lower bounds settle the question of how the sample complexity of learning sums of independent integer random variables scales with the elements in the union of their supports, in both the known-support and unknown-support settings. Finally, all our algorithms easily extend to the "semi-agnostic" learning model, in which the training data is generated from a distribution that is only $c\epsilon$-close to some $A$-sum for a constant $c > 0$.

Secondary algorithmic results: Learning with unknown support. We also give algorithms for a more challenging unknown-support variant of the learning problem. In this variant the values $a_1, \ldots, a_k$ are not provided to the learning algorithm; instead, only an upper bound $a_{\max} \geq a_k$ is given. Interestingly, it turns out that the unknown-support problem is significantly different from the known-support problem: as explained below, in the unknown-support variant the dependence on $a_{\max}$ kicks in at a smaller value of $k$ than in the known-support variant, and this dependence is exponentially more severe than in the known-support variant.

Using well-known results from hypothesis selection, it is straightforward to show that upper bounds for the known-support case yield upper bounds for the unknown-support case, essentially at the cost of an additive $O(k \log a_{\max})/\epsilon^2$ term in the sample complexity (a sketch of this counting argument is given below). This immediately yields the following:

Theorem 3 (Learning with unknown support of size $k$). For any $k \geq 3$, there is an algorithm and a positive constant $c$ with the following properties: The algorithm is given $N$, the value $k$, an accuracy parameter $\epsilon$, ...
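For intuition, here is a hedged sketch of the standard counting argument behind the additive $O(k \log a_{\max})/\epsilon^2$ term; this is our reconstruction of a routine hypothesis-selection step, not necessarily the exact argument used in the paper. There are at most
\[
\binom{a_{\max}+1}{k} \;\le\; (a_{\max}+1)^{k}
\]
candidate supports $A \subseteq \{0, 1, \ldots, a_{\max}\}$ of size $k$. Running the known-support learner once for each candidate produces a list of $M \le (a_{\max}+1)^{k}$ hypothesis distributions, and standard hypothesis selection (e.g., a pairwise tournament) identifies an $O(\epsilon)$-accurate hypothesis from among $M$ candidates using an additional
\[
O\!\left(\frac{\log M}{\epsilon^{2}}\right) \;=\; O\!\left(\frac{k \log a_{\max}}{\epsilon^{2}}\right)
\]
samples, matching the claimed overhead.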
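As a concrete illustration of the $A$-sum definition from the opening paragraph, the following is a minimal sampling sketch; the helper name `sample_A_sum` and the explicit probability-table representation of each $X_i$ are our own illustrative choices, not objects from the paper.

```python
import random

def sample_A_sum(tables):
    """Draw one sample from the A-sum S = X_1 + ... + X_N.

    `tables` is a list of N dicts; tables[i] maps each support point
    of X_i to its probability.  The X_i are sampled independently and
    need not be identically distributed, but all of their support
    points must come from the common set A.
    """
    total = 0
    for table in tables:
        points = list(table.keys())
        weights = list(table.values())
        # Draw X_i according to its own distribution and add it to the sum.
        total += random.choices(points, weights=weights, k=1)[0]
    return total

# Example: an A-sum with A = {0, 2, 7} and N = 3 non-identical summands.
tables = [
    {0: 0.5, 2: 0.5},
    {0: 0.1, 7: 0.9},
    {2: 0.3, 7: 0.7},
]
print(sample_A_sum(tables))
```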