Abstract
This paper considers the problem of variable selection in quantile regression with autoregressive errors. Recently, Wu and Liu (2009) investigated the oracle properties of the SCAD and adaptive-LASSO penalized quantile regressions under the assumption of independent but not identically distributed errors. We further relax the error assumptions so that the regression model can accommodate autoregressive errors, and then investigate the theoretical properties of our proposed penalized quantile estimators under the relaxed assumption. Optimizing the objective function is often challenging because both the quantile loss and the penalty function may be non-differentiable and/or non-concave. We adopt the concept of pseudo data proposed by Oh et al. (2007) to implement a practical algorithm for computing the penalized quantile estimates. In addition, we discuss the convergence property of the proposed algorithm. The performance of the proposed method is compared with those of the majorization-minimization algorithm (Hunter and Li, 2005) and the difference convex algorithm (Wu and Liu, 2009) through numerical and real data examples.
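For orientation, a minimal sketch of the penalized quantile objective referred to above, written in generic notation (the check loss \rho_\tau and penalty p_\lambda are standard for penalized quantile regression; the specific model, design, and notation used in the body of the paper may differ):

% Generic penalized quantile regression objective (sketch with assumed
% notation, not the paper's exact formulation): check loss plus a penalty
% such as SCAD or adaptive LASSO, minimized over the coefficients beta.
\[
  \hat{\beta} \;=\; \arg\min_{\beta}\;
    \sum_{t=1}^{n} \rho_{\tau}\!\bigl(y_t - x_t^{\top}\beta\bigr)
    \;+\; n \sum_{j=1}^{p} p_{\lambda}\!\bigl(|\beta_j|\bigr),
  \qquad
  \rho_{\tau}(u) = u\bigl(\tau - I(u < 0)\bigr).
\]

The first term is piecewise linear (hence non-differentiable at zero residuals), and penalties such as SCAD are non-concave in |\beta_j|, which is the source of the computational difficulty the abstract mentions.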