In recent years, federated learning (FL) has attracted considerable attention for its ability to protect data privacy during distributed training, yet it also introduces new privacy leakage risks. Membership inference attacks (MIAs), which aim to determine whether a specific sample was part of the training dataset, pose a serious threat to FL. Existing research on MIAs in FL has primarily focused on exploiting intrinsic model parameters or manipulating the training process, but the widespread adoption of privacy-preserving frameworks has greatly diminished the effectiveness of these traditional attacks. To overcome this limitation, this paper explores an efficient MIA algorithm tailored to encrypted FL scenarios, offering new perspectives for improving privacy-preserving technologies. Specifically, we propose a novel MIA algorithm based on multiple adversarial perturbation distances (MAPD_MIA), which exploits the asymmetry between the adversarial perturbation distributions of member and non-member samples near the decision boundary; analyzing these asymmetric perturbation characteristics enables accurate membership identification. Experimental results show that the proposed algorithm achieves accuracy rates of 63.0%, 68.7%, and 59.5%, and precision rates of 59.0%, 65.9%, and 55.8% on the CIFAR10, CIFAR100, and MNIST datasets, respectively, outperforming three mainstream MIA methods. Furthermore, the algorithm maintains robust attack performance against two common defense mechanisms, MemGuard and DP-SGD. This study provides new benchmarks and methodologies for evaluating membership privacy leakage risks in FL scenarios.
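To make the perturbation-distance idea concrete, the following is a minimal sketch in PyTorch, not the paper's implementation: the function names (`perturbation_distance`, `is_member`), the gradient-sign boundary search, the step sizes, and the thresholds are all illustrative assumptions standing in for whatever distance measures and decision rule MAPD_MIA actually uses.

```python
import torch
import torch.nn.functional as F


def perturbation_distance(model, x, y, eps=1e-3, max_steps=200):
    """L2 distance travelled under gradient-sign steps until the label flips.

    A crude proxy for a sample's distance to the decision boundary:
    x is a single input of shape (1, ...) and y its class index.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    target = torch.tensor([y], device=x.device)
    for _ in range(max_steps):
        logits = model(x_adv)
        if logits.argmax(dim=1).item() != y:  # boundary crossed
            break
        loss = F.cross_entropy(logits, target)
        (grad,) = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + eps * grad.sign()).detach().requires_grad_(True)
    return (x_adv.detach() - x).norm(p=2).item()


def is_member(model, x, y, eps_list=(1e-3, 5e-3), taus=(0.5, 1.0)):
    """Flag x as a member if every measured distance exceeds its threshold.

    Intuition behind the asymmetry: the model fits its training (member)
    samples tightly, so they tend to lie farther from the decision
    boundary than non-members. The thresholds `taus` are placeholders
    that would in practice be tuned, e.g. on shadow-model data.
    """
    dists = [perturbation_distance(model, x, y, eps=e) for e in eps_list]
    return all(d > t for d, t in zip(dists, taus))
```

Measuring the distance under several perturbation settings, rather than a single one, is what makes the test "multiple-distance": each setting probes the boundary geometry differently, and combining them reduces the overlap between the member and non-member distance distributions.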