The distributed nature and privacy sensitivity of fine-grained smart grid data create obstacles to data sharing. As a result, federated learning has emerged as an effective strategy for collaboratively training distributed load forecasting models. However, poisoning attacks can interfere with the aggregation process of federated learning, making it difficult to guarantee the accuracy and security of the global load forecasting model. Therefore, the authors propose a secure aggregation federated learning method based on similarity and distance (Fed-SAD) for distributed load forecasting. The server first estimates approximate global model parameters from the similarity among the participants' model parameters and then aggregates the global model parameters with a distance-based weighting scheme. By aggregating models securely, Fed-SAD effectively reduces the interference of poisoning attacks on short-term load forecasting. Experimental results show that, compared with the Federated Averaging (FedAvg) aggregation algorithm, Fed-SAD reduces the mean absolute percentage error (MAPE) of certain participants by 19% under sign-flipping attacks, by 15% under additive-noise attacks, and by 4.5% without attack. Furthermore, Fed-SAD consistently maintains robustness across attack scenarios and achieves high attack-detection accuracy.
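The abstract does not give the exact formulas, but the server-side aggregation step could look roughly like the following minimal sketch, assuming flattened parameter vectors, cosine similarity for screening out anomalous updates, and inverse-distance weighting around an approximate global estimate. The function name `fed_sad_aggregate`, the `trim_ratio` parameter, and these specific similarity and weighting choices are illustrative assumptions, not the authors' published procedure.

```python
import numpy as np

def fed_sad_aggregate(client_updates, trim_ratio=0.2):
    """Hypothetical sketch of similarity- and distance-based secure aggregation.

    client_updates: list of 1-D numpy arrays (flattened model parameters).
    trim_ratio: assumed fraction of least-similar clients to exclude.
    """
    updates = np.stack(client_updates)                       # (n_clients, n_params)

    # Step 1: score each update by its mean cosine similarity to the others;
    # low-similarity updates are treated as potentially poisoned and dropped.
    norms = np.linalg.norm(updates, axis=1, keepdims=True) + 1e-12
    unit = updates / norms
    sim = unit @ unit.T                                       # pairwise cosine similarity
    np.fill_diagonal(sim, 0.0)
    scores = sim.sum(axis=1) / (len(updates) - 1)
    n_keep = max(1, int(round(len(updates) * (1 - trim_ratio))))
    keep = np.argsort(scores)[-n_keep:]                       # most mutually similar clients

    # Step 2: form an approximate global model from the retained clients, then
    # weight each retained update inversely to its distance from that estimate.
    approx_global = updates[keep].mean(axis=0)
    dists = np.linalg.norm(updates[keep] - approx_global, axis=1) + 1e-12
    weights = (1.0 / dists) / (1.0 / dists).sum()
    return (weights[:, None] * updates[keep]).sum(axis=0)

# Example usage with synthetic updates from five clients.
clients = [np.random.randn(10) for _ in range(5)]
global_update = fed_sad_aggregate(clients)
```

The intuition behind this kind of scheme is that poisoned updates (e.g., sign-flipped or noise-corrupted gradients) tend to be dissimilar to the honest majority and far from the approximate global model, so they are either filtered out or heavily down-weighted before averaging.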