Multimedia retrieval models must be able to extract useful information from large-scale data for clients. As an important component of multimedia retrieval, the image classification model directly affects the efficiency and effectiveness of retrieval. Training an image classification model for multimedia retrieval requires a large amount of data. However, to protect data privacy, the training data often must remain on the client side. Federated learning was proposed to train a single model on the data of all clients while preserving privacy. In practice, however, the data distribution varies greatly across clients, and disregarding this problem yields a final model with unstable performance. To enable federated learning to work dependably in real-world settings with complex data environments, we propose FedNKD, which utilizes knowledge distillation and random noise. The superior knowledge of each client is distilled into a central server to mitigate the instability caused by Non-IID data. Importantly, a synthetic dataset is created from random noise through backpropagation of neural networks, so that it contains the abstract features of the real data. This synthetic dataset is then used to perform the knowledge distillation while protecting users' privacy. In our experimental scenarios, FedNKD outperforms existing representative algorithms by about 1.
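To illustrate the idea of synthesizing data from random noise via backpropagation, the following is a minimal PyTorch-style sketch. It is not the exact FedNKD procedure; the function name, hyperparameters, and loss choice are illustrative assumptions. The sketch optimizes noise inputs so that a trained client (teacher) model classifies them confidently, yielding inputs that encode abstract features of the real data without exposing any real samples.

```python
import torch
import torch.nn.functional as F

def synthesize_from_noise(teacher, num_samples=64, num_classes=10,
                          shape=(3, 32, 32), steps=200, lr=0.1):
    """Optimize random noise so the (frozen) teacher labels it confidently.

    `teacher` is a hypothetical stand-in for a trained client model;
    the synthetic images never touch real user data.
    """
    teacher.eval()
    # Start from pure Gaussian noise; assign arbitrary target classes.
    x = torch.randn(num_samples, *shape, requires_grad=True)
    y = torch.randint(0, num_classes, (num_samples,))
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Backpropagate through the frozen teacher into the noise itself,
        # so the inputs drift toward class-characteristic features.
        loss = F.cross_entropy(teacher(x), y)
        loss.backward()
        opt.step()
    return x.detach(), y
```

On such a synthetic set, the server could then distill client knowledge into the central model with a standard temperature-scaled KL divergence between the teacher's and student's softened logits (again, a common distillation objective offered here as an assumption rather than the paper's exact loss).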