The varying growth rates within a group of pigs pose a significant challenge for the all-in-all-out systems currently used in the pig industry. This study evaluated the applicability of statistical methods for classifying pigs at risk of growth retardation at different production stages, using a robust dataset collected under commercial conditions. Data from 26,749 crossbred pigs ((Yorkshire × Landrace) × Duroc) at weaning (17-27 d), 15,409 pigs at the end of the nursery period (60-78 d), and 4,996 pigs at slaughter (151-161 d) were analyzed under three cut points (lowest 10%, 20%, and 30% of weights) to characterize light animals. Records were randomly split into training and testing sets in a 2:1 ratio, and each training dataset was analyzed with an ordinary least squares approach and three machine learning algorithms (decision tree, random forest, and generalized boosted regression). The classification performance of each analytical approach was evaluated by the area under the ROC curve (AUC). At all production stages and cut points, the random forest and generalized boosted regression models demonstrated superior classification performance, with AUC estimates ranging from 0.772 to 0.861. The parametric linear model also showed acceptable classification performance, with slightly lower AUC estimates ranging from 0.752 to 0.818. In contrast, the single decision tree performed poorly, with AUC estimates between 0.608 and 0.726. Key predictive factors varied across production stages: birthweight-related factors were most influential at weaning, while weight at earlier stages became more important later in the production cycle. These findings highlight the potential of machine learning algorithms to improve decision-making and efficiency in pig production systems by accurately identifying pigs at risk of growth retardation.
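The evaluation pipeline described above (label the lightest fraction of pigs at a cut point, split records 2:1 into training and testing sets, score the held-out set, and summarize discrimination with the AUC) can be illustrated with a minimal, self-contained sketch. This is not the study's code: the simulated birthweights and weaning weights, the simple rank-based AUC function, and the use of birthweight alone as a risk score are all illustrative assumptions; the actual study fitted OLS, decision tree, random forest, and generalized boosted regression models to commercial data.

```python
import random

def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen positive (light pig, label 1) receives a higher risk score than a
    randomly chosen negative (label 0), counting ties as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

random.seed(1)

# Hypothetical data: weaning weight loosely driven by birthweight plus noise.
birthweight = [random.gauss(1.4, 0.3) for _ in range(900)]
weaning = [6.0 + 3.5 * bw + random.gauss(0.0, 0.8) for bw in birthweight]

# Cut point as in the study: the lowest 20% of weights define "light" pigs.
cut = sorted(weaning)[int(0.2 * len(weaning))]
labels = [1 if w < cut else 0 for w in weaning]

# 2:1 train/test split; here the "model" is simply the negative of
# birthweight (lighter at birth -> higher risk score), so no fitting is
# needed for this illustration.
test_labels = labels[600:]
test_scores = [-bw for bw in birthweight[600:]]

print(round(auc(test_labels, test_scores), 3))
```

With any trained classifier (e.g., a random forest), `test_scores` would instead hold the predicted probability of being light, and the same AUC function would apply unchanged, which is what makes the metric comparable across the parametric and machine learning approaches.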