Federated learning (FL) is a distributed machine learning approach in which multiple clients collaboratively train a joint model without exchanging their local data. Despite FL's success in preserving data privacy, its vulnerability to free-rider attacks has attracted increasing attention. Numerous defense methods have been proposed to protect FL against free-rider attacks. However, they may fail against highly camouflaged free-riders, and their effectiveness may degrade sharply when more than 20% of the clients are free-riders. To address these challenges, we reconsider the defense from a novel perspective, i.e., model weight evolving frequency. Empirically, we gain a novel insight: during FL training, the model weight evolving frequency of free-riders differs significantly from that of benign clients. Inspired by this insight, we propose a novel defense method based on the model Weight Evolving Frequency, referred to as WEF-Defense. Specifically, we first collect the weight evolving frequency (defined as the WEF-Matrix) during local training. Each client uploads its local model's WEF-Matrix to the server together with its model weights at each iteration. The server then separates free-riders from benign clients based on differences in their WEF-Matrices. Finally, the server uses a personalized approach to provide different global models to the corresponding clients, which prevents free-riders from obtaining high-quality models. Comprehensive experiments conducted on five datasets and five models demonstrate that WEF-Defense achieves better defense effectiveness (∼1.4×) than state-of-the-art baselines and identifies free-riders at an earlier stage of training. Moreover, we verify the effectiveness of WEF-Defense against an adaptive attack and visualize the WEF-Matrix during training to interpret its effectiveness. The data and code of WEF-Defense are available at: https://github.com/research-limingjun/WEF-Defense.git.
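The intuition above can be illustrated with a minimal sketch. The code below uses a simple proxy for "weight evolving frequency": for each weight, count how often its update direction flips across consecutive local training steps. The function name `wef_matrix` and the toy clients are assumptions for illustration; the paper's exact WEF-Matrix definition and detection procedure may differ.

```python
import numpy as np

def wef_matrix(weight_snapshots):
    """Per-weight count of update-direction flips across consecutive
    local steps -- a simple proxy for 'weight evolving frequency'
    (the paper's exact definition may differ)."""
    snaps = np.stack(weight_snapshots)            # shape (T, n_weights)
    deltas = np.sign(np.diff(snaps, axis=0))      # update directions per step
    flips = np.abs(np.diff(deltas, axis=0)) > 0   # direction changed?
    return flips.sum(axis=0)                      # flip count per weight

# Toy example: a benign client whose weights genuinely evolve versus a
# free-rider that simply returns the received model unchanged.
rng = np.random.default_rng(0)
benign = [rng.normal(size=4) for _ in range(10)]   # hypothetical snapshots
base = rng.normal(size=4)
rider = [base.copy() for _ in range(10)]           # no real training

print(wef_matrix(benign).sum())  # high flip count
print(wef_matrix(rider).sum())   # zero: weights never evolve
```

Under this proxy, a server could flag clients whose flip counts fall far below the cohort's, which mirrors the paper's idea of separating free-riders by their WEF-Matrix difference.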