The network location service, which efficiently obtains the network latency among large-scale nodes, has been a hot topic during the last decade. With increasing numbers of participating nodes, the network location service has to balance accuracy and scalability. Network-coordinate methods scale well by embedding the pairwise latencies into a low-dimensional coordinate system. The prediction errors are iteratively optimized by adjusting the coordinates with respect to neighbors. Unfortunately, the optimization process is vulnerable to inaccurate coordinates, leading to destabilized positions. In this paper, we propose RMF, a relative-coordinate-based distributed sparsity-preserving matrix-factorization method that provides guaranteed stability for the coordinate system. In RMF, each node maintains a low-rank square matrix that is incrementally adjusted with respect to its neighbors' relative coordinates. The optimization is self-stabilizing: it is guaranteed to converge and is not interfered with by inaccurate coordinates, since the relative coordinates are free of computational errors. By exploiting the sparse structure of the square matrix, the optimization enforces L1-norm regularization to preserve the sparseness of the matrix. Simulation results and a PlanetLab-based experiment confirm that RMF converges to stable positions within 10 to 15 rounds and decreases the prediction errors by 10% to 20%.

Existing network-coordinate methods fall into two categories: centralized and distributed methods. (i) Centralized methods, such as GNP [4] and IDES [5], require a set of centralized landmark nodes to serve as reference points for calculating the coordinates, which can cause performance bottlenecks as the system grows.
(ii) Distributed methods, such as Vivaldi [6], DMF [7], and DMFSGD [9], let each node directly adjust its coordinate with respect to the coordinates of a number of sampled neighbors, which avoids single points of failure.

Network-coordinate methods have to remain consistently accurate under varying system conditions. For example, end hosts may join or leave the system at any time, causing churn. A well-studied stabilization model is self-stabilization, which guarantees convergence to a "legitimate" state in a bounded amount of time, regardless of the initial state [13]. We introduce a self-stabilized network-coordinate model in section II. We propose to map the "legitimate" state to the local-minimum position of each network coordinate, which is not interfered with by system churn.

Distributed network-coordinate methods face a destabilization issue: some neighbors' coordinates can be arbitrarily inaccurate due to the churn of distributed systems; accordingly, the coordinate optimization process is easily perturbed by these neighbors' positions, as shown in section VI-B. Researchers [6], [2], [8], [9] propose to use weights to estimate the accuracy of neighbors' coordinates, and to scale the coordinate movements by these weights. The weights are correlated with the accuracy of neighbors' coordinates. However, calculating exact weights is challenging, since ...
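The weight-scaled coordinate adjustment described above can be sketched as follows. This is a minimal, simplified illustration of a Vivaldi-style update toward one neighbor; the particular weight formula and the constants cc and ce used here are assumptions for the sketch, not the exact scheme of any cited system:

```python
import numpy as np

def vivaldi_step(x, e, nx, ne, rtt, cc=0.25, ce=0.25):
    """One weight-scaled, Vivaldi-style coordinate update toward a
    single neighbor. x, nx: local and neighbor coordinates; e, ne:
    local and neighbor error estimates; rtt: measured latency.
    The weight w discounts movements suggested by neighbors whose
    coordinates are estimated to be inaccurate."""
    w = e / (e + ne + 1e-12)            # relative confidence weight
    diff = x - nx
    dist = np.linalg.norm(diff) + 1e-12
    err = rtt - dist                    # pairwise prediction error
    # refresh the local error estimate (moving average scaled by w)
    e = ce * w * abs(err) / rtt + (1.0 - ce * w) * e
    # move along the unit vector from the neighbor, scaled by w and err
    x = x + cc * w * err * diff / dist
    return x, e

# toy example: two nodes 1 unit apart whose measured RTT is 2;
# after one step the predicted distance moves closer to the RTT
x, e = vivaldi_step(np.zeros(2), 1.0, np.array([1.0, 0.0]), 1.0, rtt=2.0)
```

An inaccurate neighbor (large ne) drives w toward zero, so its suggested movement is damped; this is exactly the mechanism that depends on estimating the weights well.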