We give the first representation-independent hardness results for PAC learning intersections of halfspaces, a central concept class in computational learning theory. Our hardness results are derived from two public-key cryptosystems due to Regev, which are based on the worst-case hardness of well-studied lattice problems. Specifically, we prove that a polynomial-time algorithm for PAC learning intersections of n^ε halfspaces (for a constant ε > 0) in n dimensions would yield a polynomial-time solution to Õ(n^1.5)-uSVP (unique shortest vector problem). We also prove that PAC learning intersections of n^ε low-weight halfspaces would yield a polynomial-time quantum solution to Õ(n^1.5)-SVP and Õ(n^1.5)-SIVP (shortest vector problem and shortest independent vector problem, respectively). By making stronger assumptions about the hardness of uSVP, SVP, and SIVP, we show that there is no polynomial-time algorithm for learning intersections of log^c n halfspaces in n dimensions, for c > 0 sufficiently large. Our approach also yields the first representation-independent hardness results for learning polynomial-size depth-2 neural networks and polynomial-size depth-3 arithmetic circuits.
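To make the concept class concrete: an intersection of k halfspaces labels a point positive exactly when it satisfies every one of k linear threshold conditions. The sketch below is purely illustrative and not from the paper; the names `W`, `theta`, and `halfspace_intersection_label` are hypothetical.

```python
import numpy as np

def halfspace_intersection_label(x, W, theta):
    """Label x by the AND of k halfspaces.

    Halfspace i accepts x when W[i] . x >= theta[i]; the intersection
    concept labels x positive only if all k halfspaces accept.
    (Illustrative sketch; W and theta are hypothetical names.)
    """
    return bool(np.all(W @ x >= theta))

# Two halfspaces in the plane, x1 >= 0 and x2 >= 0,
# whose intersection is the closed first quadrant.
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])
theta = np.array([0.0, 0.0])

print(halfspace_intersection_label(np.array([1.0, 2.0]), W, theta))   # True
print(halfspace_intersection_label(np.array([1.0, -2.0]), W, theta))  # False
```

Each halfspace alone is learnable in polynomial time (e.g. by linear programming); the abstract's results concern the apparent intractability of learning the AND of many such halfspaces by any polynomial-time hypothesis representation.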