WiFi-based human gesture recognition has a wide range of applications in smart homes. Existing methods train gesture classification models by collecting large amounts of WiFi signal data in a centralized manner. However, centralized training faces challenges including high communication overhead and the risk of data privacy leakage. Federated learning (FL) makes it possible to collaboratively train and share models without compromising data privacy. One of the main challenges in FL is data that is non-Independent and Identically Distributed (non-IID) across clients. In the gesture recognition scenario, because WiFi signal propagation is susceptible to cross-environment and cross-person interference, the non-IID issue mainly manifests as a cross-domain problem, which makes the knowledge learned by different client models incompatible. Effectively extracting and combining the knowledge learned by each client in such cross-domain scenarios is therefore challenging. To address this problem, we propose pFedBKD, a novel personalized federated learning scheme based on bidirectional knowledge distillation. First, each client distills knowledge that is beneficial to it from the shared server model via knowledge distillation, which helps train the client's personalized model. Second, the server adaptively adjusts the aggregation weights according to the deviation between the shared model and each client's local model, so that the shared model captures more common knowledge. We conduct experiments on multiple open-source datasets. Experimental results show that our method outperforms existing methods and effectively alleviates the degradation in recognition accuracy caused by cross-domain challenges.
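The abstract describes two mechanisms: a client-side distillation loss that pulls useful knowledge from the shared server model, and a server-side aggregation that down-weights clients whose local models deviate strongly from the shared model. The abstract gives no implementation details, so the following is only an illustrative NumPy sketch under our own assumptions: the function names, the alpha/temperature loss weighting, and the softmax-over-negative-deviation aggregation are placeholders, not the paper's actual formulation.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; T > 1 softens the distribution for distillation.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def client_distill_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    """Client-side objective (illustrative): cross-entropy on local labels plus a
    KL term that transfers knowledge from the shared server model (the teacher)."""
    p_student = softmax(student_logits, T)
    p_teacher = softmax(teacher_logits, T)
    ce = -np.mean(np.log(
        softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12))
    kl = np.mean(np.sum(
        p_teacher * (np.log(p_teacher + 1e-12) - np.log(p_student + 1e-12)), axis=-1))
    return (1 - alpha) * ce + alpha * (T ** 2) * kl

def server_aggregate(global_w, client_ws, tau=1.0):
    """Server-side aggregation (illustrative): clients whose local parameters
    deviate less from the shared model receive larger aggregation weights."""
    devs = np.array([np.linalg.norm(w - global_w) for w in client_ws])
    weights = softmax(-devs / tau)  # smaller deviation -> larger weight
    agg = np.sum([a * w for a, w in zip(weights, client_ws)], axis=0)
    return agg, weights
```

In this sketch the "bidirectional" aspect is that knowledge flows both ways per round: the server model teaches each client through the KL term, and the clients' deviation-weighted parameters shape the next shared model.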