As a distributed training paradigm, federated learning (FL) has been widely applied to quality-of-service (QoS) prediction. However, existing FL-based QoS prediction methods ignore the unreliability of end devices, which leads to wasted training resources and high communication costs. Considering the instability of end devices in real training environments, we propose a low-cost semi-asynchronous federated learning method (LCSA-Fed) based on lag tolerance to overcome the slow convergence and suboptimal prediction accuracy of existing models. By tolerating relatively lagging local models, LCSA-Fed reduces both model communication costs and training costs. In addition, we introduce innovations in the user selection and model aggregation phases that improve prediction accuracy while further reducing overhead. Validation experiments on a publicly available QoS dataset show that LCSA-Fed effectively reduces overhead and improves prediction accuracy.