Deep learning has achieved remarkable success in many industrial applications and scientific research fields. One essential reason is that deep models can learn rich information from large-scale labeled datasets through supervised learning. It is well accepted that robust deep models rely heavily on the quality of data labels. However, current large-scale datasets often contain noisy labels, caused by sensor errors, human mistakes, or the inaccuracy of search engines, which can severely degrade the performance of deep models. In this survey, we summarize existing work on learning with noisy labels into two main categories, loss correction and sample selection, and present their methodologies, commonly used experimental setups, datasets, and state-of-the-art results. Finally, we discuss a promising research direction that may be valuable for future study.