Distributed learning requires frequent communication of neural network update data. To address this, we present a set of new compression tools, jointly called differential neural network coding (dNNC). dNNC is specifically tailored to efficiently code incremental neural network updates and includes tools for federated BatchNorm folding (FedBNF), structured and unstructured sparsification, tensor row skipping, quantization optimization, and temporal adaptation for improved context-adaptive binary arithmetic coding (CABAC). Furthermore, dNNC provides a new parameter update tree (PUT) mechanism, which makes it possible to identify updates for different neural network parameter sub-sets and their relationships in synchronous and asynchronous neural network communication scenarios. Most of these tools have been included in the standardization process of the second edition of the NNC standard (ISO/IEC 15938-17). We benchmark dNNC in multiple federated and split learning scenarios using a variety of NN models and data, including vision transformers and large-scale ImageNet experiments: it achieves compression efficiency gains of 60% compared to edition 1 of the NNC standard for transparent coding cases, i.e., without degrading inference or training performance. This corresponds to a reduction in the size of the NN updates to less than 1% of their original size. Moreover, dNNC reduces the overall energy consumption required for communication in federated learning systems by up to 94%.
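To make the core idea of coding incremental updates concrete, the following is a minimal illustrative sketch, not the dNNC codec or the NNC reference software: a sender computes per-tensor deltas against a shared baseline, applies a simple unstructured sparsification (names, the threshold, and the plain-numpy model representation are all assumptions for illustration), and a receiver reconstructs the updated model from baseline plus decoded delta. In dNNC, the sparsified deltas would additionally be quantized and entropy coded with CABAC before transmission.

```python
# Illustrative sketch only (not the dNNC codec): differential coding of a
# federated NN update as baseline + sparsified delta, using plain numpy dicts.
import numpy as np


def encode_update(baseline: dict, updated: dict, threshold: float = 1e-3) -> dict:
    """Sender side: compute per-tensor incremental updates and zero out tiny changes."""
    delta = {}
    for name, base in baseline.items():
        d = updated[name] - base                 # incremental update (difference signal)
        d[np.abs(d) < threshold] = 0.0           # unstructured sparsification (assumed threshold)
        delta[name] = d                          # in dNNC: further quantized and CABAC-coded
    return delta


def apply_update(baseline: dict, delta: dict) -> dict:
    """Receiver side: reconstruct the updated model from the shared baseline and the delta."""
    return {name: base + delta[name] for name, base in baseline.items()}


# Usage with a single synthetic tensor standing in for one client update.
baseline = {"layer0.weight": np.zeros((4, 4), dtype=np.float32)}
updated = {"layer0.weight": np.random.randn(4, 4).astype(np.float32) * 1e-2}
delta = encode_update(baseline, updated)
reconstructed = apply_update(baseline, delta)
```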