In real-world applications, face recognition and person re-identification are subject to image degradations such as motion blur, atmospheric turbulence, or upsampling artifacts, all of which are known to lower performance. This work directly addresses these low-quality scenarios with (1) practical, novel updates to training and inference that improve robustness to realistic distortions in face recognition and person re-identification, and (2) new datasets for long-distance recognition. We propose a method that progressively learns from images subject to distortions ranging from mild to severe, caused mainly by atmospheric turbulence. The method introduces a novel distortion loss that improves robustness and is empirically shown to be highly effective in low-quality scenarios. Two further strategies are proposed to integrate distortion augmentation while retaining top performance in high-quality scenarios. First, during training, an adaptive weighting schedule exploits the different levels of distortion augmentation to train the model in an easy-to-hard manner. Second, at inference, a magnitude-weighted fusion of features from the parallel models retains the highest robustness across both high-quality and low-quality imagery. Unlike prior work, our model leverages no image restoration or style transfer technique, and we are the first to employ explicit distortion weighting during training and evaluation. Our model outperforms prior work on face recognition and person re-identification benchmarks, including IJB-S, TinyFace, DeepChange, MSMT17, and our novel long-distance datasets.
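The abstract names an adaptive, easy-to-hard weighting schedule over distortion-augmented training data but does not specify its form. The following is a minimal Python sketch of one plausible curriculum over discrete distortion levels; the function names (`distortion_level_weights`, `sample_distortion_level`), the softmax-style weighting, and the `sharpness` parameter are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def distortion_level_weights(epoch, num_epochs, num_levels=4, sharpness=5.0):
    """Hypothetical easy-to-hard schedule: early epochs favor mild
    distortion levels, later epochs shift mass toward severe ones."""
    progress = epoch / max(1, num_epochs - 1)      # 0 -> 1 over training
    levels = np.arange(num_levels)                 # 0 = clean ... num_levels-1 = severe
    target = progress * (num_levels - 1)           # currently favored level
    logits = -sharpness * np.abs(levels - target)  # peak at the target level
    weights = np.exp(logits)
    return weights / weights.sum()

def sample_distortion_level(epoch, num_epochs, num_levels=4, rng=None):
    """Draw the distortion level to apply to the next training sample."""
    rng = rng or np.random.default_rng()
    w = distortion_level_weights(epoch, num_epochs, num_levels)
    return int(rng.choice(num_levels, p=w))
```

Under this schedule, most early samples draw mild distortions, and the sampler gradually concentrates on the severest level by the final epochs, matching the easy-to-hard description in the abstract.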
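Likewise, the magnitude-weighted fusion used at inference is only named, not defined, in the abstract. Below is a minimal PyTorch sketch under the assumption that each parallel model's pre-normalization embedding magnitude acts as a per-sample confidence weight; the softmax over magnitudes and the final re-normalization are illustrative choices, not confirmed details of the method.

```python
import torch
import torch.nn.functional as F

def magnitude_weighted_fusion(feats):
    """Fuse embeddings from parallel models (e.g., a high-quality branch
    and a distortion-robust branch), weighting each by its L2 magnitude,
    a common proxy for embedding confidence.

    feats: list of (batch, dim) tensors, one per parallel model.
    Returns a fused, L2-normalized (batch, dim) tensor.
    """
    stacked = torch.stack(feats)               # (models, batch, dim)
    mags = stacked.norm(dim=-1, keepdim=True)  # (models, batch, 1)
    weights = torch.softmax(mags, dim=0)       # per-sample weights across models
    fused = (weights * F.normalize(stacked, dim=-1)).sum(dim=0)
    return F.normalize(fused, dim=-1)

# Example: fuse two hypothetical parallel branches on a batch of 8 images.
f_hq = torch.randn(8, 512)
f_robust = torch.randn(8, 512)
fused = magnitude_weighted_fusion([f_hq, f_robust])  # (8, 512)
```

The design intuition, consistent with the abstract, is that whichever branch produces the stronger (higher-magnitude) embedding for a given image dominates the fused feature, so performance is retained on both high-quality and low-quality inputs.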