The high-accuracy surface modeling (HASM) method was developed to meet the needs of applications that demand accurate topographic data, such as catchment hydrologic modeling and assessment of the impact of anthropic activities on environmental systems. Although HASM can produce a digital elevation model (DEM) surface of higher accuracy than classical methods such as inverse distance weighting, spline, and kriging, it requires numerous iterations to solve large linear systems, which impedes its application to high-resolution, large-scale surface interpolation. This paper demonstrates the use of graphics processing units (GPUs) to accelerate HASM in constructing large-scale, high-resolution DEM surfaces. We parallelized the linear system solver of HASM with Compute Unified Device Architecture (CUDA), a parallel programming model developed by NVIDIA, and designed a memory-saving strategy that enables the HASM algorithm to run on GPUs. The speedup of the GPU-based algorithm over the CPU-based algorithm was tested through simulations of both an ideal Gaussian synthetic surface and a real topographic surface on the Loess Plateau of Gansu Province. Taking the CPU-based algorithm as the reference, the GPU-parallelized algorithm attains a speedup ratio of over 109, and the speedup increases with the scale and resolution of the dataset. The memory management strategy reduces memory usage by more than eight times the number of grid cells. Implementing HASM on GPUs makes it feasible to model large-scale, high-resolution surfaces within a reasonable time and indicates the potential benefits of using GPUs as massively parallel co-processors for arithmetic-intensive data-processing applications.
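The abstract does not describe how the linear system solver was mapped onto the GPU. As a purely illustrative sketch, not the authors' implementation, the dominant cost of iterative solvers for large sparse systems of this kind is typically the sparse matrix-vector product, which parallelizes naturally in CUDA with one thread per matrix row. The CSR storage layout and all identifiers below (csr_spmv, row_ptr, col_idx, val) are assumptions introduced for illustration only.

// Illustrative sketch: CSR sparse matrix-vector product y = A*x on the GPU,
// the kind of kernel an iterative solver for a large sparse linear system
// would call repeatedly. Not taken from the paper; names and layout are assumed.
#include <cuda_runtime.h>
#include <cstdio>

// One thread handles one matrix row.
__global__ void csr_spmv(int n, const int *row_ptr, const int *col_idx,
                         const float *val, const float *x, float *y) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n) {
        float sum = 0.0f;
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
            sum += val[j] * x[col_idx[j]];
        y[row] = sum;
    }
}

int main() {
    // Tiny 3x3 test matrix A = [[2,1,0],[1,2,1],[0,1,2]] and vector x = [1,1,1].
    const int n = 3;
    int   h_row_ptr[] = {0, 2, 5, 7};
    int   h_col_idx[] = {0, 1, 0, 1, 2, 1, 2};
    float h_val[]     = {2, 1, 1, 2, 1, 1, 2};
    float h_x[]       = {1, 1, 1}, h_y[3];

    int *d_row_ptr, *d_col_idx;
    float *d_val, *d_x, *d_y;
    cudaMalloc(&d_row_ptr, sizeof(h_row_ptr));
    cudaMalloc(&d_col_idx, sizeof(h_col_idx));
    cudaMalloc(&d_val, sizeof(h_val));
    cudaMalloc(&d_x, sizeof(h_x));
    cudaMalloc(&d_y, sizeof(h_y));
    cudaMemcpy(d_row_ptr, h_row_ptr, sizeof(h_row_ptr), cudaMemcpyHostToDevice);
    cudaMemcpy(d_col_idx, h_col_idx, sizeof(h_col_idx), cudaMemcpyHostToDevice);
    cudaMemcpy(d_val, h_val, sizeof(h_val), cudaMemcpyHostToDevice);
    cudaMemcpy(d_x, h_x, sizeof(h_x), cudaMemcpyHostToDevice);

    csr_spmv<<<(n + 255) / 256, 256>>>(n, d_row_ptr, d_col_idx, d_val, d_x, d_y);
    cudaMemcpy(h_y, d_y, sizeof(h_y), cudaMemcpyDeviceToHost);
    printf("y = [%g, %g, %g]\n", h_y[0], h_y[1], h_y[2]);  // expected: [3, 4, 3]

    cudaFree(d_row_ptr); cudaFree(d_col_idx); cudaFree(d_val);
    cudaFree(d_x); cudaFree(d_y);
    return 0;
}

Storing only the nonzero coefficients in a compressed format like this is one common way to keep device memory proportional to the number of grid cells rather than to the full matrix, which is in the spirit of the memory-saving strategy the abstract mentions, though the paper's actual scheme may differ.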