With the rapid development of artificial intelligence and Internet of Things (IoT) technologies, automotive companies are integrating federated learning into connected vehicles to provide users with smarter services. Federated learning enables vehicles to collaboratively train a global model without sharing sensitive local data, thereby mitigating privacy risks. However, the dynamic and open nature of the Internet of Vehicles (IoV) makes it vulnerable to attacks in which adversaries intercept or tamper with the transmitted local model parameters, compromising their integrity and exposing user privacy. Although existing solutions such as differential privacy and encryption can address these issues, they may reduce data usability or increase computational complexity. To tackle these challenges, we propose a conditional privacy-preserving identity authentication scheme, CPPA-SM2, to provide privacy protection for federated learning. Unlike existing methods, CPPA-SM2 allows vehicles to participate in training anonymously, thereby achieving efficient privacy protection. Performance evaluations and experimental results demonstrate that, compared with state-of-the-art schemes, CPPA-SM2 significantly reduces the signing, verification, and communication overhead while satisfying more security properties.