Decentralized Federated Learning improves data privacy and eliminates single points of failure by removing reliance on centralized storage and model aggregation in distributed computing systems. Ensuring the integrity of computations during local model training remains a significant challenge, particularly before each client shares its gradient updates. Current methods for ensuring computation integrity often involve patching local models to implement cryptographic techniques such as Zero-Knowledge Proofs. However, this approach becomes highly complex, and sometimes impractical, for large-scale models that use techniques such as random dropout to improve training convergence. Such dropout introduces non-deterministic behavior, making it difficult to verify model updates under deterministic protocols. To address this issue, we propose ProxyZKP, a novel framework that combines Zero-Knowledge Proofs with polynomial proxy models to ensure computation integrity during local training. Each local node maintains a private model for online deep learning applications and a proxy model that mediates decentralized training by exchanging gradient updates. The multivariate polynomial nature of the proxy models facilitates the application of Zero-Knowledge Proofs, which verify the computation integrity of each node's updates without disclosing private data. Experimental results indicate that ProxyZKP substantially reduces computational load, achieving proof generation times 30–50% faster than established methods such as zk-SNARKs and Bulletproofs. This improvement is largely due to the high parallelization potential of the univariate polynomial decomposition used in proof generation. Additionally, integrating Differential Privacy into the ProxyZKP framework reduces the risk of Gradient Inversion attacks by adding calibrated noise to the gradients while maintaining competitive model accuracy. These results demonstrate that ProxyZKP is a scalable and efficient solution for ensuring training integrity in decentralized federated learning environments, particularly in scenarios with frequent model updates and strong scalability requirements.
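The differential-privacy step mentioned above can be illustrated with a minimal sketch. The snippet below (hypothetical helper names, not the paper's implementation) shows the standard clip-and-add-Gaussian-noise treatment of a local gradient before it is shared with peers; the actual noise calibration used in ProxyZKP may differ.

```python
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a gradient and add calibrated Gaussian noise (DP-SGD style).

    Hypothetical helper for illustration only; ProxyZKP's exact
    calibration of the noise scale may differ.
    """
    rng = rng or np.random.default_rng()
    # Clip the gradient to bound its L2 sensitivity.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    # Add Gaussian noise scaled to the clipping bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

# Example: privatize a local gradient update before exchanging it.
local_grad = np.random.randn(10)
shared_grad = privatize_gradient(local_grad)
```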