Occluded person re-identification (ReID) is a challenging task because images suffer from various obstructions and carry less discriminative information due to incomplete body parts. Most existing works rely on auxiliary models to infer the visible body parts and on part-level feature matching to overcome contaminated body information, which consumes extra inference time and fails under complex occlusions. More recently, some methods have used masks provided by image occlusion augmentation (OA) to supervise mask learning. These works estimate an occlusion score for each part of the image by roughly dividing it horizontally, so they cannot predict occlusion accurately and fail on vertical occlusions. To address this issue, we propose an end-to-end Smoothing Corrupted Feature Prediction (SCFP) network for occluded person ReID. Specifically, aided by OA, which simulates the occlusions appearing on pedestrians and provides occlusion masks, the proposed Occlusion Decoder and Estimator (ODE) estimates and eliminates corrupted features, supervised by mask labels generated by restricting all occlusions to a group of patterns. We also design an Occlusion Pattern Smoothing (OPS) module to improve the performance of ODE on irregular obstacles. Subsequently, a Local-to-Body (L2B) representation is constructed to mitigate the limitation of partial body information for the final matching. To evaluate SCFP, we compare it with state-of-the-art methods on occluded and holistic person ReID benchmarks, where it achieves superior results. SCFP attains the highest Rank-1 accuracies of 70.9%, 87.0%, and 93.2% on Occluded-Duke, Occluded-ReID, and P-DukeMTMC, respectively. Furthermore, it generalizes well to holistic datasets, yielding Rank-1 accuracies of 95.8% on Market-1501 and 90.7% on DukeMTMC-reID.
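For readers unfamiliar with OA, the sketch below illustrates the general idea of occlusion augmentation with mask supervision: an occluder patch is pasted onto a pedestrian image at a random location, and the corresponding binary mask is recorded as a label for mask learning. The function name, NumPy-based interface, and rectangular occluder are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np

def occlusion_augment(image, occluder, rng=None):
    """Paste an occluder patch at a random location and return the
    augmented image together with its binary occlusion mask.

    `image` and `occluder` are H x W x 3 uint8 arrays; the occluder
    must be no larger than the image in either dimension.
    (Hypothetical helper; the paper's OA is not specified here.)
    """
    rng = rng or np.random.default_rng()
    H, W = image.shape[:2]
    h, w = occluder.shape[:2]
    # Random top-left corner for the pasted occluder.
    y = int(rng.integers(0, H - h + 1))
    x = int(rng.integers(0, W - w + 1))
    augmented = image.copy()
    augmented[y:y + h, x:x + w] = occluder
    # Mask is 1 on occluded pixels, 0 elsewhere; it serves as the
    # supervision signal for learning to predict corrupted regions.
    mask = np.zeros((H, W), dtype=np.uint8)
    mask[y:y + h, x:x + w] = 1
    return augmented, mask
```

In a setup like this, the mask would be downsampled or quantized to the method's group of occlusion patterns before being used as a label for the ODE.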