As neural network models grow in complexity, communication overhead has become a major bottleneck in federated learning (FL). Incorporating pruning algorithms into FL has emerged as a promising way to reduce resource consumption. However, existing pruning algorithms are highly sensitive to network architecture and typically require multiple rounds of retraining to identify optimal structures; applying such strategies directly to FL would inevitably introduce additional communication costs. To address these issues, we propose DualPFL (Dual Sparse Pruning Federated Learning), a communication-efficient FL framework that combines dynamic sparse pruning with an adaptive model aggregation strategy. Experimental results demonstrate that, compared to similar works, our framework improves convergence speed by more than twofold under non-IID data, achieving up to 84% accuracy on the CIFAR-10 dataset, 95% mean average precision (mAP) on the COCO dataset using YOLOv8, and 96% accuracy on the TT100K traffic sign dataset. These findings indicate that DualPFL facilitates secure and efficient collaborative computing in smart city applications.
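The abstract names dynamic sparse pruning and adaptive aggregation only at a high level. As a minimal sketch of one plausible reading, the NumPy example below shows a single FL round in which each client prunes its update to a target sparsity via top-k magnitude selection, and the server averages each coordinate only over the clients that actually transmitted it. The function names (`prune_update`, `aggregate`), the magnitude criterion, and the mask-aware averaging rule are illustrative assumptions, not the actual DualPFL algorithm.

```python
import numpy as np

def prune_update(update, sparsity):
    """Keep only the largest-magnitude entries of a client update.

    `sparsity` is the fraction of entries zeroed out (0.7 keeps 30%).
    Returns the sparse update and the binary mask that was applied.
    (Hypothetical stand-in for the paper's dynamic sparse pruning.)
    """
    flat = update.ravel()
    k = max(1, int(round((1.0 - sparsity) * flat.size)))  # entries to keep
    threshold = np.partition(np.abs(flat), -k)[-k]        # k-th largest magnitude
    mask = (np.abs(update) >= threshold).astype(update.dtype)
    return update * mask, mask

def aggregate(sparse_updates, masks):
    """Mask-aware averaging: each coordinate is averaged only over the
    clients that kept it, so sparse clients are not diluted by zeros
    they never sent. (Assumed form of the adaptive aggregation.)"""
    total = np.sum(sparse_updates, axis=0)
    counts = np.sum(masks, axis=0)
    return np.where(counts > 0, total / np.maximum(counts, 1), 0.0)

# Toy round: 3 clients, a 10-parameter model, 70% sparsity
rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(3)]
pruned, masks = zip(*(prune_update(u, sparsity=0.7) for u in updates))
global_update = aggregate(np.stack(pruned), np.stack(masks))
print(global_update)
```

At 70% sparsity each client uploads roughly 30% of its coordinates, which is the source of the communication savings; the mask-aware denominator keeps the averaged magnitudes unbiased when clients prune different coordinates.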