Advances in deep learning have revolutionized medical image segmentation, enabling precise delineation of complex anatomical structures. However, the scarcity of annotated training samples remains a significant bottleneck. Federated learning (FL) offers a way to leverage data from multiple healthcare institutions without centralizing it, but as models grow larger, rising communication costs restrict FL to fewer nodes, which in turn constrains the volume of usable data. Model lightweighting therefore becomes a simultaneous requirement. To address this problem, this study proposes FKD-Med, a novel framework that integrates FL for privacy-preserving data amalgamation across multiple healthcare institutions and uses knowledge distillation (KD) to improve communication efficiency; the "Med" in FKD-Med denotes its focus on computational problems in medical applications. Our principal contributions include the design of an open-source framework that seamlessly combines FL and KD, making it applicable to a broad spectrum of medical informatics tasks. By shrinking the models that must be exchanged, our approach substantially increases the volume of data that can be brought into training, boosting both communication efficiency and training throughput. Evaluated on two medical image segmentation datasets with TransUNet and ResUNet as teacher models, FKD-Med preserves data privacy, lowers communication costs, and increases accuracy: the student models' parameter counts are reduced to 1/127 and 1/1027 of the teachers', and at the same parameter count the distilled models show accuracy improvements of 0.25%, 0.43%, 1.35%, and 1.46%, respectively. This positions FKD-Med not only as a pivotal tool for multi-institutional medical research but also as a versatile platform adaptable to a wide array of real-world medical engineering applications. The code is publicly available at https://github.com/SUN-1024/FKD-Med.
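To make the FL + KD composition concrete, the following is a minimal sketch, assuming a PyTorch setting: each client distills a large on-site teacher (e.g., TransUNet or ResUNet) into a compact student, and only the student's weights are communicated for FedAvg aggregation. This is not the FKD-Med implementation (see the linked repository); the helper names, temperature, and loss weighting are illustrative assumptions.

```python
# Illustrative sketch only -- not the FKD-Med code; hyperparameters and
# helper names (distillation_loss, local_kd_round, fedavg) are assumptions.
import copy
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard soft-target KD loss combined with supervised cross-entropy.
    Works for segmentation logits of shape [B, C, H, W] with labels [B, H, W]."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients for the temperature-softened targets
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def local_kd_round(student, teacher, loader, epochs=1, lr=1e-3):
    """One client's local round: the large teacher never leaves the site;
    only the small student is trained and its weights returned."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    teacher.eval()
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                t_logits = teacher(x)  # frozen teacher provides soft targets
            loss = distillation_loss(student(x), t_logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student.state_dict()

def fedavg(state_dicts):
    """Server-side FedAvg: element-wise average of the clients' student weights."""
    avg = copy.deepcopy(state_dicts[0])
    for k in avg:
        stacked = torch.stack([sd[k].float() for sd in state_dicts])
        avg[k] = stacked.mean(0).to(avg[k].dtype)  # keep original dtype
    return avg
```

A full round would then broadcast the averaged weights back to every client via student.load_state_dict(fedavg(...)) and repeat. Because only the compact student crosses the network, the per-round communication cost shrinks roughly in proportion to the parameter reductions reported above.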