This article proposes a distributed accelerated algorithm for solving distributed optimization problems defined over a time-varying directed network. Unlike existing algorithms, by employing row- and column-stochastic matrices, this work eliminates the conservatism that doubly-stochastic matrices impose in related work and does not require estimating the Perron eigenvector of a stochastic matrix. Assuming that the global objective function is strongly convex and the gradient of each local objective function is Lipschitz-continuous, it is proved that the algorithm converges linearly to the global optimal solution under properly chosen uncoordinated step-sizes and momentum parameters. Numerical simulations verify the theoretical results and demonstrate the practicality of the proposed algorithm.
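For orientation, the sketch below illustrates one member of the algorithm family the abstract describes: a push-pull-type update with heavy-ball momentum, in which agent states are mixed with a row-stochastic matrix A, gradient trackers with a column-stochastic matrix B, and each agent uses its own step-size and momentum parameter. This is a minimal illustrative sketch, not the paper's exact scheme; the quadratic problem data, the fixed strongly connected digraph (the paper treats time-varying networks), and all parameter values are assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3                                # number of agents, decision dimension

# Local objectives f_i(x) = 0.5 * ||Q_i x - b_i||^2; their sum is strongly
# convex and each gradient is Lipschitz-continuous (illustrative data only).
Q = [np.eye(d) + 0.3 * rng.standard_normal((d, d)) for _ in range(n)]
b = [rng.standard_normal(d) for _ in range(n)]
def grad(i, x):
    return Q[i].T @ (Q[i] @ x - b[i])

# Strongly connected digraph with self-loops; entry (i, j) = 1 means agent i
# receives from agent j. Row-normalize for A, column-normalize for B -- no
# doubly-stochastic weights and no Perron-eigenvector estimation needed.
adj = np.array([[1, 1, 0, 0, 1],
                [0, 1, 1, 0, 0],
                [0, 0, 1, 1, 0],
                [1, 0, 0, 1, 0],
                [0, 1, 0, 0, 1]], dtype=float)
A = adj / adj.sum(axis=1, keepdims=True)   # row-stochastic mixing for states
B = adj / adj.sum(axis=0, keepdims=True)   # column-stochastic mixing for trackers

alpha = 0.02 + 0.01 * rng.random(n)        # uncoordinated step-sizes
beta = 0.05 + 0.05 * rng.random(n)         # uncoordinated momentum parameters

x = rng.standard_normal((n, d))            # row i holds agent i's estimate
x_prev = x.copy()
y = np.array([grad(i, x[i]) for i in range(n)])   # trackers start at local gradients

for _ in range(1500):
    # State update: consensus mixing, tracked-gradient step, heavy-ball momentum.
    x_new = A @ x - alpha[:, None] * y + beta[:, None] * (x - x_prev)
    # Tracker update: column-stochastic mixing preserves the average gradient.
    y = B @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
    x_prev, x = x, x_new

# Exact minimizer of sum_i f_i for comparison.
x_star = np.linalg.solve(sum(Qi.T @ Qi for Qi in Q),
                         sum(Q[i].T @ b[i] for i in range(n)))
print("max deviation from x*:", np.max(np.abs(x - x_star)))
```

In this family of methods, the row-stochastic matrix drives the states toward consensus while the column-stochastic matrix keeps the trackers' sum equal to the sum of local gradients, which is what removes the need for doubly-stochastic weights or Perron-eigenvector estimation.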