Combining federated learning with recommender systems aims to address the privacy problems of recommendation by keeping user data locally on client devices during model training. However, most existing approaches rely on user devices to fully compute the deep model designed for large-scale item recommendation, imposing high computation and communication overheads on resource-constrained user devices. Consequently, achieving efficient federated recommendation across ubiquitous mobile devices remains an open research problem. To this end, in this paper we propose SpFedRec, an efficient and privacy-preserving federated learning framework based on cloud-edge collaboration for large-scale item recommendation. In our method, to reduce the computation and communication cost of the federated two-tower model, a split learning approach is applied to migrate the item model from participants' edge devices to the computationally powerful cloud side and to compress item data during transmission. Meanwhile, to enhance feature representation, a Squeeze-and-Excitation network mechanism is applied to the backbone model to strengthen its perception of dominant features. Moreover, because the transmitted gradients contain private information about the user, we propose a multi-party circular secret-sharing chain for stronger privacy protection. Extensive experiments under plausible assumptions on two real-world datasets demonstrate that our proposed method reduces average computation time and communication cost by 23% and 49%, respectively. Furthermore, the proposed model achieves performance comparable to other state-of-the-art federated recommendation models.
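To make the gradient-protection idea concrete, the following minimal Python sketch illustrates additive secret sharing of client gradients, the general principle behind a circular secret-sharing chain: the server only ever sees the aggregate gradient, never an individual client's. This is an illustrative sketch, not the exact SpFedRec protocol; the number of clients, gradient shapes, and the way shares are exchanged are assumptions made for the example.

    # Minimal sketch (illustrative only, not the paper's exact protocol):
    # additive secret sharing so the server only learns the summed gradient.
    import numpy as np

    def make_shares(grad: np.ndarray, n_parties: int) -> list:
        """Split a gradient into n_parties random shares that sum to it."""
        shares = [np.random.randn(*grad.shape) for _ in range(n_parties - 1)]
        shares.append(grad - sum(shares))  # last share makes the sum exact
        return shares

    # Hypothetical setup: three clients, each holding a local gradient.
    rng = np.random.default_rng(0)
    client_grads = [rng.normal(size=4) for _ in range(3)]

    # Each client splits its gradient and hands one share to every party
    # (in a circular chain the shares would be passed along the ring).
    all_shares = [make_shares(g, n_parties=3) for g in client_grads]

    # Party j sums the shares it holds and uploads only that partial sum.
    partial_sums = [sum(all_shares[i][j] for i in range(3)) for j in range(3)]

    # The server reconstructs the aggregate without seeing any single gradient.
    aggregate = sum(partial_sums)
    assert np.allclose(aggregate, sum(client_grads))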