Federated learning (FL) is gradually becoming a key learning paradigm in privacy-preserving machine learning (ML) systems. In FL, a large number of clients cooperate with a central server to learn a shared model without sharing their own data sets. However, because client data sets are highly heterogeneous, standard FL is often hard to tune and suffers from performance degradation due to the inconsistency among local models. To this end, in this paper we propose a novel FL scheme, termed client reputation federated learning (CRFL), which dynamically assesses the reputation of the clients participating in FL. Our method leverages techniques from model explanation and aims to precisely measure each client's impact on the global model. Specifically, we first calculate the saliency-weighted variance of pixelwise relevance scores as the quality factor of a single sample. We then extract the activation values at the last hidden layer to compute the divergence factor of each client's data set. Finally, the server integrates these two factors into an assessment of the client's reputation. By leveraging this assessment, CRFL can dynamically adjust the weights of the clients in each aggregation round, leading to a significant improvement over the baseline method in terms of model accuracy and
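
The reputation pipeline outlined above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the use of cosine distance between mean last-hidden-layer activations as the divergence measure, and the `alpha`-weighted combination of the two factors are all assumptions introduced here for concreteness.

```python
import numpy as np

def quality_factor(relevance_maps):
    """Saliency-weighted variance of pixelwise relevance scores,
    averaged over the samples (shape: n_samples x H x W)."""
    scores = []
    for R in relevance_maps:
        w = np.abs(R) / (np.abs(R).sum() + 1e-12)   # saliency weights
        mu = (w * R).sum()                          # weighted mean relevance
        scores.append((w * (R - mu) ** 2).sum())    # weighted variance
    return float(np.mean(scores))

def divergence_factor(client_acts, global_acts):
    """Divergence of a client's last-hidden-layer activations from the
    global model's, measured here (as an assumption) by cosine distance
    between the mean activation vectors."""
    a = client_acts.mean(axis=0)
    b = global_acts.mean(axis=0)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return float(1.0 - cos)

def reputation_weights(qualities, divergences, alpha=0.5):
    """Combine the two factors into per-client aggregation weights:
    higher quality and lower divergence yield a higher reputation."""
    q = np.asarray(qualities, dtype=float)
    d = np.asarray(divergences, dtype=float)
    q = q / (q.max() + 1e-12)                       # normalise to [0, 1]
    d = d / (d.max() + 1e-12)
    rep = alpha * q + (1.0 - alpha) * (1.0 - d)
    return rep / rep.sum()                          # weights sum to 1
```

In each aggregation round, the server would score every participating client with these two factors and use the resulting weights in place of the uniform (or data-size-proportional) weights of standard federated averaging.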