As a form of distributed machine learning, federated learning enables multiple clients to collaboratively build a model over decentralized data without explicitly aggregating that data. Owing to its ability to break down data silos, federated learning has received increasing attention in many fields, including finance, healthcare, and education. However, because the server cannot observe clients' training data or their local training processes, federated learning is exposed to security threats. Many recent works have studied attacks on and defenses of federated learning, but no survey has focused specifically on poisoning attacks against federated learning and the corresponding defenses. In this paper, we review state-of-the-art schemes for poisoning attacks on federated learning and their defenses, and point out future research directions in these areas.
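To make the setting concrete, the following is a minimal, illustrative sketch (not a method from this survey) of FedAvg-style training on a toy linear-regression task, in which one hypothetical client poisons the aggregation by scaling and flipping its model update; all names, data, and parameters here are assumptions chosen for illustration only.

```python
# Minimal sketch: FedAvg-style aggregation with one model-poisoning client.
# Everything here (toy task, learning rates, scaling factor) is illustrative.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])          # ground-truth weights of the toy task

def make_client_data(n=100):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

def local_sgd(w, X, y, lr=0.05, epochs=5):
    """One client's local training: gradient descent on squared loss."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

clients = [make_client_data() for _ in range(5)]
global_w = np.zeros(2)

for rnd in range(20):
    updates = []
    for i, (X, y) in enumerate(clients):
        local_w = local_sgd(global_w, X, y)
        delta = local_w - global_w
        if i == 0:                       # hypothetical malicious client:
            delta = -5.0 * delta         # flips and scales its update (poisoning)
        updates.append(delta)
    # The server only averages updates and never sees raw client data,
    # which is why poisoned updates are hard to detect without extra defenses.
    global_w += np.mean(updates, axis=0)

print("aggregated weights:", global_w, "target:", true_w)
```

Running this sketch shows the aggregated model drifting away from the target weights, illustrating how a single malicious participant can degrade the global model under plain averaging.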