The annotation of pathological images often introduces label noise, which can lead to overfitting and notably degrade model performance. Recent studies have attempted to address this by filtering samples based on the memorization effect of DNNs. However, these methods often require prior knowledge of the noise rate or a small, clean validation subset, both of which are extremely difficult to obtain in real clinical diagnosis workflows. To reduce the effect of noisy labels, we propose a novel training strategy that enhances noise robustness without such prior conditions. Specifically, our approach incorporates self-supervised regularization to encourage the model to focus on the intrinsic relationships among images rather than relying solely on labels. Additionally, we employ a historical prediction penalty module that enforces consistency between successive predictions, thereby slowing the model's shift from memorizing clean labels to memorizing noisy ones. Furthermore, we design an adaptive separation module that performs implicit sample selection and flips the labels of the noisy samples it identifies, mitigating their impact on training. Comprehensive evaluations on synthetic and real-world pathological datasets with varying noise levels confirm that our method outperforms state-of-the-art approaches. Notably, our noise-handling process requires no prior conditions. Our method also achieves highly competitive performance in low-noise scenarios, which reflect the noise levels typically encountered in pathological images, demonstrating its potential for practical clinical applications.
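
As an illustrative sketch only (not the exact formulation of our historical prediction penalty module, which is detailed in the methodology), one common way to enforce consistency between successive predictions is to penalize divergence from an exponential moving average of past predictions; the function name, the KL-divergence form, and the momentum value below are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def historical_consistency_penalty(logits, hist_probs, momentum=0.9):
    """Illustrative sketch: penalize divergence between current predictions
    and a running average of past predictions, discouraging abrupt shifts
    toward memorizing noisy labels.

    logits:     current model outputs, shape (batch, num_classes)
    hist_probs: stored moving average of past softmax predictions for the
                same samples, shape (batch, num_classes)
    """
    probs = F.softmax(logits, dim=1)
    # KL divergence between the historical distribution and the current one
    penalty = F.kl_div(probs.log(), hist_probs, reduction="batchmean")
    # Update the stored historical predictions (detached from the graph)
    new_hist = momentum * hist_probs + (1.0 - momentum) * probs.detach()
    return penalty, new_hist
```

In such a scheme the penalty would be added to the supervised loss with a weighting coefficient, and the historical predictions would be maintained per sample across epochs.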