This study examines the impact of data integrity attacks on Federated Learning (FL) for load forecasting in smart grid systems, where privacy-sensitive data require robust management. While FL provides a privacy-preserving approach to distributed model training, it remains susceptible to attacks such as data poisoning, which can degrade model performance. We compare Centralized Federated Learning (CFL) and Decentralized Federated Learning (DFL), the latter using line, ring, and bus topologies, under adversarial conditions. Employing a three-layer Artificial Neural Network (ANN) with substation-level datasets (APEhourly, PJMEhourly, and COMEDhourly), we evaluate the system’s resilience in the absence of anomaly detection. Results indicate that DFL significantly outperforms CFL in attack resistance, achieving Mean Absolute Percentage Errors (MAPEs) of 0.48%, 4.29%, and 0.702% across the three datasets, compared with CFL MAPEs of 6.07%, 18.49%, and 10.19%. These results demonstrate the potential of DFL as a resilient, secure solution for load forecasting in smart grids, minimizing dependence on anomaly detection to maintain data integrity.
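For readers unfamiliar with the evaluation metric, the MAPE values reported above follow the standard definition; a minimal sketch (the load and forecast values below are hypothetical, not taken from the paper's datasets):

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, expressed in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Hypothetical hourly loads (MW) and model forecasts
actual = [100.0, 200.0, 300.0]
forecast = [99.0, 202.0, 297.0]
print(mape(actual, forecast))  # → 1.0 (each point is off by 1%)
```

A lower MAPE means forecasts deviate less, in relative terms, from the observed load; this is why the DFL figures (e.g. 0.48%) indicate stronger resilience than the CFL figures (e.g. 6.07%).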