This paper is concerned with the computation of the high-dimensional zero-norm penalized quantile regression (QR) estimator, which is defined as a global minimizer of the zero-norm penalized check loss minimization. To seek a desirable approximation to this estimator, we reformulate this NP-hard lower semicontinuous problem as an equivalent augmented Lipschitz optimization problem and exploit its coupled structure to propose a multi-stage convex relaxation approach (MSCRA_PPA). In each step, the MSCRA_PPA inexactly solves a weighted ℓ1-regularized check loss minimization problem with a proximal dual semismooth Newton method. Under a mild restricted strong convexity condition, we provide a theoretical guarantee for the MSCRA_PPA by establishing an error bound between each iterate and the true estimator and achieving a rate of linear convergence in a statistical sense. Numerical comparisons on synthetic and real data with MSCRA_IPM and MSCRA_ADMM (two MSCRAs whose subproblems are solved by an interior point method and a semiproximal ADMM, respectively) show that MSCRA_PPA achieves estimation performance comparable to that of the other two methods while requiring only one-half (respectively, one-third) of the time required by MSCRA_ADMM (respectively, MSCRA_IPM).
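To make the per-stage subproblem concrete, the following is a minimal illustrative sketch of the weighted ℓ1-regularized check loss minimization that each stage of a multi-stage convex relaxation scheme must solve. It uses plain subgradient descent as a stand-in solver; the paper's proximal dual semismooth Newton method is far more efficient, and the step-size rule, weights `w`, and quantile level `tau` here are assumptions chosen only for illustration.

```python
import numpy as np

def check_loss(r, tau):
    # Quantile (check) loss: rho_tau(r) = r * (tau - 1{r < 0}), averaged over samples.
    return np.mean(r * (tau - (r < 0)))

def objective(X, y, b, tau, w):
    # Weighted l1-regularized check loss: (1/n) sum_i rho_tau(y_i - x_i' b) + sum_j w_j |b_j|.
    return check_loss(y - X @ b, tau) + np.sum(w * np.abs(b))

def weighted_l1_qr(X, y, tau, w, lr=0.1, n_iter=3000):
    """Illustrative subgradient-descent solver for the per-stage subproblem
    min_b (1/n) sum_i rho_tau(y_i - x_i' b) + sum_j w_j |b_j|.
    NOT the paper's proximal dual semismooth Newton method; a simple sketch only."""
    n, p = X.shape
    b = np.zeros(p)
    for t in range(n_iter):
        r = y - X @ b
        # Subgradient of the check loss term (derivative tau - 1{r<0} w.r.t. r, chain rule through -X).
        g = -X.T @ (tau - (r < 0)) / n
        # Subgradient of the weighted l1 penalty.
        g = g + w * np.sign(b)
        # Diminishing step size, a standard choice for subgradient methods.
        b -= lr / np.sqrt(t + 1) * g
    return b
```

In a multi-stage scheme, the weights `w` would be updated between stages from the previous iterate (uniform weights in the first stage reduce the subproblem to ordinary ℓ1-penalized QR), which is where the "multi-stage convex relaxation" structure enters.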