Graphical models are a rich language for describing high-dimensional distributions in terms of their dependence structure. While there are algorithms with provable guarantees for learning undirected graphical models in a variety of settings, there has been much less progress in the important scenario when there are latent variables. Here we study Restricted Boltzmann Machines (or RBMs), which are a popular model with wide-ranging applications in dimensionality reduction, collaborative filtering, topic modeling, feature extraction and deep learning.

The main message of our paper is a strong dichotomy in the feasibility of learning RBMs, depending on the nature of the interactions between variables: ferromagnetic models can be learned efficiently, while general models cannot. In particular, we give a simple greedy algorithm based on influence maximization to learn ferromagnetic RBMs with bounded degree. In fact, we learn a description of the distribution on the observed variables as a Markov Random Field. Our analysis is based on tools from mathematical physics that were developed to show the concavity of magnetization. Our algorithm extends straightforwardly to general ferromagnetic Ising models with latent variables.

Conversely, we show that even for a constant number of latent variables with constant degree, without ferromagneticity the problem is as hard as sparse parity with noise. This hardness result is based on a sharp and surprising characterization of the representational power of bounded degree RBMs: the distribution on their observed variables can simulate any bounded order MRF. This result is of independent interest since RBMs are the building blocks of deep belief networks.

Lemma 6.2. Suppose $X_i$ is the spin at vertex $i$ in an $(\alpha, \beta)$-nondegenerate Ising model and $j$ is a neighbor of $i$.
Then for any fixing $x_{\neq i,j}$ of the other spins $X_{\neq i,j}$ of the Ising model, we have
$$\left| \mathbb{E}[X_i \mid X_j = 1, X_{\neq i,j} = x_{\neq i,j}] - \mathbb{E}[X_i \mid X_j = -1, X_{\neq i,j} = x_{\neq i,j}] \right| \ge 2\alpha\left(1 - \tanh^2(\beta)\right).$$

Proof. Since $\tanh'(x) = 1 - \tanh^2(x)$ and $\tanh$ is a monotone function, we see that if we let $x = -J_{ij} + \sum_{k \notin \{i,j\}} J_{ik} x_k$, then since $x \in [-\beta, \beta]$ we have
$$\left| \tanh(x + 2J_{ij}) - \tanh(x) \right| \ge 2|J_{ij}| \inf_{x \in [-\beta, \beta]} \left(1 - \tanh^2(x)\right) \ge 2\alpha\left(1 - \tanh^2(\beta)\right).$$
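The inequality above can be checked numerically. The sketch below is a sanity check, not part of the paper: it assumes (my reading of $(\alpha,\beta)$-nondegeneracy) that $\alpha \le |J_{ij}|$ and that the effective field $x$ and its shifted value $x + 2J_{ij}$ both lie in $[-\beta, \beta]$, then verifies that the gap $|\tanh(x + 2J_{ij}) - \tanh(x)|$ never falls below $2\alpha(1 - \tanh^2(\beta))$ over random draws.

```python
import math
import random

def influence_lower_bound(alpha, beta):
    # The lemma's lower bound: 2*alpha*(1 - tanh^2(beta)).
    return 2 * alpha * (1 - math.tanh(beta) ** 2)

def check_lemma(alpha=0.2, beta=1.5, trials=10_000, seed=0):
    """Randomized check that |tanh(x + 2J) - tanh(x)| >= 2*alpha*(1 - tanh^2(beta))
    whenever alpha <= |J| and both x and x + 2J lie in [-beta, beta]."""
    rng = random.Random(seed)
    bound = influence_lower_bound(alpha, beta)
    for _ in range(trials):
        # Sample an edge weight with alpha <= |J| <= beta/2 (so x + 2J can stay in range).
        J = rng.choice([-1, 1]) * rng.uniform(alpha, beta / 2)
        # Sample x so that the whole interval between x and x + 2J sits in [-beta, beta];
        # by the mean value theorem the gap is 2|J| * (1 - tanh^2(xi)) for some xi in that interval.
        if J > 0:
            x = rng.uniform(-beta, beta - 2 * J)
        else:
            x = rng.uniform(-beta - 2 * J, beta)
        gap = abs(math.tanh(x + 2 * J) - math.tanh(x))
        if gap < bound - 1e-12:
            return False
    return True
```

Intuitively, the mean value theorem gives $\tanh(x + 2J_{ij}) - \tanh(x) = 2J_{ij}\tanh'(\xi)$ for some $\xi$ between the two points, and $\tanh'$ is minimized on $[-\beta, \beta]$ at the endpoints, which is exactly where the $2\alpha(1 - \tanh^2(\beta))$ bound comes from.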