Although robust learning and local differential privacy are both widely studied fields of research, combining the two settings is an almost unexplored topic. We consider the problem of estimating a discrete distribution in total variation from n contaminated data batches under a local differential privacy constraint. A fraction 1 − ε of the batches contain k i.i.d. samples drawn from a discrete distribution p over d elements. To protect the users' privacy, each of the samples is privatized using an α-locally differentially private mechanism. The remaining εn batches are an adversarial contamination. The minimax rate of estimation under contamination alone, with no privacy, is known to be ε/√k + √(d/(kn)), up to a log(1/ε) factor. Under the privacy constraint alone, the minimax rate of estimation is √(d²/(α²kn)). We show that combining the two constraints leads to a minimax estimation rate of ε√(d/(α²k)) + √(d²/(α²kn)), up to a log(1/ε) factor, which is larger than the sum of the two separate rates. We provide a polynomial-time algorithm achieving this bound, as well as a matching information-theoretic lower bound.
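
To make the privatization step concrete, the following is a minimal illustrative sketch (not the paper's robust estimator) of one standard α-locally differentially private mechanism, d-ary randomized response, together with the channel-inversion debiasing that yields an unbiased estimate of p from uncontaminated privatized samples. All function names are ours; note that randomized response is simple but not rate-optimal for large d, so it only illustrates the privacy constraint, not the √(d²/(α²kn)) rate.

```python
import numpy as np

def randomized_response(x, d, alpha, rng):
    """d-ary randomized response: an alpha-LDP mechanism over {0, ..., d-1}.

    Reports the true value with probability e^alpha / (e^alpha + d - 1),
    and any other fixed value with probability 1 / (e^alpha + d - 1),
    so the likelihood ratio between any two inputs is at most e^alpha.
    Assumes d >= 2.
    """
    p_true = np.exp(alpha) / (np.exp(alpha) + d - 1)
    if rng.random() < p_true:
        return x
    # Otherwise report a uniformly random element different from x.
    y = int(rng.integers(d - 1))
    return y if y < x else y + 1

def debiased_estimate(reports, d, alpha):
    """Unbiased estimate of p from privatized reports (no contamination).

    Inverts the randomized-response channel: for each symbol j,
    E[empirical frequency of j] = p_other + (p_true - p_other) * p_j.
    """
    reports = np.asarray(reports)
    freqs = np.bincount(reports, minlength=d) / len(reports)
    p_true = np.exp(alpha) / (np.exp(alpha) + d - 1)
    p_other = 1.0 / (np.exp(alpha) + d - 1)
    return (freqs - p_other) / (p_true - p_other)

# Toy usage: privatize i.i.d. samples from p, then debias.
rng = np.random.default_rng(0)
d, alpha = 5, 1.0
p = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
samples = rng.choice(d, size=20000, p=p)
reports = [randomized_response(x, d, alpha, rng) for x in samples]
print(debiased_estimate(reports, d, alpha))  # approximately recovers p
```

In the paper's setting, each of the k samples in every batch would pass through such an α-LDP channel, and the adversary may replace εn entire batches of privatized reports, which is what the robust polynomial-time estimator must tolerate.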